ResearchTrend.AI

Distilling Text Style Transfer With Self-Explanation From LLMs

arXiv:2403.01106 · 2 March 2024
Chiyu Zhang, Honglong Cai, Yuezhang Li, Yuexin Wu, Le Hou, Muhammad Abdul-Mageed

Papers citing "Distilling Text Style Transfer With Self-Explanation From LLMs"

7 / 7 papers shown
Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
ALM · 206 · 499 · 0 · 03 May 2023

SCOTT: Self-Consistent Chain-of-Thought Distillation
Jamie Yap, Zhengyang Wang, Zheng Li, K. Lynch, Bing Yin, Xiang Ren
LRM · 57 · 92 · 0 · 03 May 2023

LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
Minghao Wu, Abdul Waheed, Chiyu Zhang, Muhammad Abdul-Mageed, Alham Fikri Aji
ALM · 127 · 119 · 0 · 27 Apr 2023

Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual Style Transfer with Small Language Models
Mirac Suzgun, Luke Melas-Kyriazi, Dan Jurafsky
VLM · 77 · 65 · 0 · 23 May 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM · ALM · 306 · 11,909 · 0 · 04 Mar 2022

Text Detoxification using Large Pre-trained Neural Models
David Dale, Anton Voronov, Daryna Dementieva, V. Logacheva, Olga Kozlova, Nikita Semenov, Alexander Panchenko
39 · 71 · 0 · 18 Sep 2021

Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer
Huiyuan Lai, Antonio Toral, Malvina Nissim
27 · 56 · 0 · 14 May 2021