Fine-tuning Language Models for Recipe Generation: A Comparative Analysis and Benchmark Study

4 February 2025
Anneketh Vij
Changhao Liu
Rahul Anil Nair
Theo Ho
Edward Shi
Ayan Bhowmick
Abstract

This research studies the recipe generation task by fine-tuning a range of small language models, with a focus on developing robust evaluation metrics and comparing models on this open-ended generation task. The study presents extensive experiments with multiple model architectures, ranging from T5-small (Raffel et al., 2023) and SmolLM-135M (Allal et al., 2024) to Phi-2 (Microsoft Research, 2023), implementing both traditional NLP metrics and custom domain-specific evaluation metrics. Our novel evaluation framework incorporates recipe-specific metrics for assessing content quality and introduces approaches to allergen substitution. The results indicate that, while larger models generally perform better on standard metrics, the relationship between model size and recipe quality is more nuanced when domain-specific metrics are considered. SmolLM-360M and SmolLM-1.7B achieve comparable performance despite their size difference, both before and after fine-tuning, while fine-tuned Phi-2 shows notable limitations in recipe generation despite its larger parameter count. The comprehensive evaluation framework and allergen substitution system provide valuable insights for future work on recipe generation and on broader NLG tasks that require domain expertise and safety considerations.
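To make the setup concrete, the sketch below shows one plausible way to fine-tune a small causal language model such as SmolLM-135M on ingredient-to-recipe pairs using Hugging Face transformers. The prompt format, dataset contents, and training hyperparameters are assumptions for illustration; the abstract does not specify the paper's exact pipeline.

```python
# Minimal fine-tuning sketch, assuming Hugging Face transformers/datasets.
# The prompt format and the tiny example corpus are hypothetical.
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "HuggingFaceTB/SmolLM-135M"  # one of the models compared in the paper

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # SmolLM has no dedicated pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Hypothetical ingredient -> recipe pairs; the actual training corpus is not
# given in the abstract.
examples = [
    {"ingredients": "flour, eggs, milk",
     "recipe": "Whisk everything together and fry as pancakes."},
]

def to_text(ex):
    # Assumed prompt format: ingredients in, full recipe out.
    return {"text": f"Ingredients: {ex['ingredients']}\nRecipe: {ex['recipe']}"}

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = Dataset.from_list(examples).map(to_text).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="recipe-smollm",
                           num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    # mlm=False gives standard next-token (causal LM) objectives.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```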
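The abstract also highlights custom domain-specific evaluation metrics. One simple example of what such a metric could look like is ingredient coverage: the fraction of requested ingredients that actually appear in the generated recipe. The name and definition below are illustrative assumptions, not the paper's actual framework, which is richer.

```python
# Illustrative recipe-specific metric (assumed definition): fraction of
# requested ingredients mentioned in the generated text.
def ingredient_coverage(requested: list[str], generated_recipe: str) -> float:
    """Return the share of requested ingredients that appear in the recipe."""
    text = generated_recipe.lower()
    used = sum(1 for ing in requested if ing.lower() in text)
    return used / len(requested) if requested else 0.0

print(ingredient_coverage(["flour", "eggs", "milk"],
                          "Whisk eggs with milk, fold in flour, fry."))  # 1.0
```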
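Finally, one plausible reading of the "allergen substitution" component is a rule-based rewriting pass over generated recipes. The substitution table and matching logic below are illustrative assumptions, not the paper's actual system.

```python
# Hedged sketch of a rule-based allergen-substitution pass. The table and
# matching strategy are hypothetical.
import re

# Hypothetical allergen -> safe-alternative table.
SUBSTITUTIONS = {
    "peanut": {"peanut butter": "sunflower seed butter",
               "peanuts": "roasted chickpeas"},
    "dairy": {"milk": "oat milk", "butter": "coconut oil",
              "cheese": "nutritional yeast"},
    "gluten": {"wheat flour": "rice flour", "soy sauce": "tamari"},
}

def substitute_allergens(recipe: str, allergens: list[str]) -> str:
    """Replace flagged ingredients with safe alternatives (longest match first)."""
    for allergen in allergens:
        table = SUBSTITUTIONS.get(allergen, {})
        for ingredient in sorted(table, key=len, reverse=True):
            pattern = r"\b" + re.escape(ingredient) + r"\b"
            recipe = re.sub(pattern, table[ingredient], recipe,
                            flags=re.IGNORECASE)
    return recipe

print(substitute_allergens(
    "Cream the butter, then stir in wheat flour and milk.",
    ["dairy", "gluten"]))
# -> "Cream the coconut oil, then stir in rice flour and oat milk."
```

Matching longer ingredient names first avoids a shorter entry (e.g. "butter") clobbering a compound one (e.g. "peanut butter") before the compound rule can fire.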

@article{vij2025_2502.02028,
  title={Fine-tuning Language Models for Recipe Generation: A Comparative Analysis and Benchmark Study},
  author={Anneketh Vij and Changhao Liu and Rahul Anil Nair and Theodore Eugene Ho and Edward Shi and Ayan Bhowmick},
  journal={arXiv preprint arXiv:2502.02028},
  year={2025}
}