ResearchTrend.AI

Don't Half-listen: Capturing Key-part Information in Continual Instruction Tuning
arXiv:2403.10056

15 March 2024
Yongquan He, Xuancheng Huang, Minghao Tang, Lingxun Meng, Xiang Li, Wei Lin, Wenyuan Zhang, Yifu Gao
Topics: ALM, CLL

Papers citing "Don't Half-listen: Capturing Key-part Information in Continual Instruction Tuning"

5 / 5 papers shown

Boosting LLM Translation Skills without General Ability Loss via Rationale Distillation
Junhong Wu, Yang Zhao, Yangyifan Xu, Bing Liu, Chengqing Zong
Topics: CLL | 17 Oct 2024

LM-Cocktail: Resilient Tuning of Language Models via Model Merging
Shitao Xiao, Zheng Liu, Peitian Zhang, Xingrun Xing
Topics: MoMe, KELM | 22 Nov 2023

CITB: A Benchmark for Continual Instruction Tuning
Zihan Zhang, Meng Fang, Ling-Hao Chen, Mohammad-Reza Namazi-Rad
Topics: ALM, CLL | 23 Oct 2023

Fine-tuned Language Models are Continual Learners
Thomas Scialom, Tuhin Chakrabarty, Smaranda Muresan
Topics: CLL, LRM | 24 May 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Topics: OSLM, ALM | 04 Mar 2022