
arXiv:2409.19808

Can Models Learn Skill Composition from Examples?

29 September 2024
Haoyu Zhao
Simran Kaur
Dingli Yu
Anirudh Goyal
Sanjeev Arora
Abstract

As large language models (LLMs) become increasingly advanced, their ability to exhibit compositional generalization -- the capacity to combine learned skills in novel ways not encountered during training -- has garnered significant attention. This type of generalization, particularly in scenarios beyond training data, is also of great interest in the study of AI safety and alignment. A recent study introduced the SKILL-MIX evaluation, where models are tasked with composing a short paragraph demonstrating the use of a specified k-tuple of language skills. While small models struggled with composing even with k=3, larger models like GPT-4 performed reasonably well with k=5 and 6. In this paper, we employ a setup akin to SKILL-MIX to evaluate the capacity of smaller models to learn compositional generalization from examples. Utilizing a diverse set of language skills -- including rhetorical, literary, reasoning, theory of mind, and common sense -- GPT-4 was used to generate text samples that exhibit random subsets of k skills. Subsequent fine-tuning of 7B and 13B parameter models on these combined-skill texts, for increasing values of k, revealed the following findings: (1) Training on combinations of k=2 and 3 skills results in noticeable improvements in the ability to compose texts with k=4 and 5 skills, despite models never having seen such examples during training. (2) When skill categories are split into training and held-out groups, models significantly improve at composing texts with held-out skills during testing despite having only seen training skills during fine-tuning, illustrating the efficacy of the training approach even with previously unseen skills. This study also suggests that incorporating skill-rich (potentially synthetic) text into training can substantially enhance the compositional capabilities of models.
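
The abstract describes a data-generation recipe: sample a random k-tuple of language skills, then prompt a strong model (GPT-4 in the paper) to write a short paragraph exhibiting all of them, and fine-tune smaller models on the resulting texts. The sketch below illustrates only the sampling-and-prompting step; the skill names, topics, and prompt wording are hypothetical placeholders, not the paper's actual skill lists or prompts.

```python
# Hypothetical sketch of a Skill-Mix-style data-generation step.
# Skill names, topics, and prompt wording are illustrative assumptions,
# not taken from the paper.
import random

SKILLS = [
    "metaphor",           # literary
    "red herring",        # rhetorical
    "modus tollens",      # logical reasoning
    "false belief",       # theory of mind
    "spatial reasoning",  # common sense
    "understatement",
]
TOPICS = ["gardening", "a job interview", "a chess match", "cooking dinner"]


def make_prompt(k: int, rng: random.Random) -> tuple[list[str], str]:
    """Sample k distinct skills and build a generation prompt for them."""
    skills = rng.sample(SKILLS, k)
    topic = rng.choice(TOPICS)
    prompt = (
        f"Write a short paragraph about {topic} that naturally illustrates "
        f"all of the following language skills: {', '.join(skills)}. "
        "Do not name the skills explicitly."
    )
    return skills, prompt


if __name__ == "__main__":
    rng = random.Random(0)
    # Training data uses small k (e.g. 2 or 3); evaluation probes larger k (4 or 5).
    for k in (2, 3):
        skills, prompt = make_prompt(k, rng)
        print(f"k={k}: {skills}\n  {prompt}\n")
```

In the setup the abstract describes, prompts like these would be sent to a strong generator model, and the returned paragraphs used as fine-tuning data for the 7B and 13B models, with larger k reserved for evaluation.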
