Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning

6 June 2023
Chujie Zheng
Pei Ke
Zheng Zhang
Minlie Huang
BDL

Papers citing "Click: Controllable Text Generation with Sequence Likelihood Contrastive Learning"

25 / 25 papers shown
Robust Multi-Objective Preference Alignment with Online DPO
Raghav Gupta
Ryan Sullivan
Yunxuan Li
Samrat Phatale
Abhinav Rastogi
01 Mar 2025

Yi-Lightning Technical Report
01.AI:
Alan Wake
Albert Wang
Bei Chen
...
Yuxuan Sha
Zhaodong Yan
Zhiyuan Liu
Zirui Zhang
Zonghong Dai
OSLM
02 Dec 2024

RSA-Control: A Pragmatics-Grounded Lightweight Controllable Text Generation Framework
Yifan Wang
Vera Demberg
24 Oct 2024

AIPO: Improving Training Objective for Iterative Preference Optimization
Yaojie Shen
Xinyao Wang
Yulei Niu
Ying Zhou
Lexin Tang
Libo Zhang
Fan Chen
Longyin Wen
13 Sep 2024

LLM-based multi-agent poetry generation in non-cooperative environments
Ran Zhang
Steffen Eger
LLMAG
05 Sep 2024

E2CL: Exploration-based Error Correction Learning for Embodied Agents
Hanlin Wang
Chak Tou Leong
Jian Wang
Wenjie Li
05 Sep 2024

Meta-Rewarding Language Models: Self-Improving Alignment with LLM-as-a-Meta-Judge
Tianhao Wu
Weizhe Yuan
O. Yu. Golovneva
Jing Xu
Yuandong Tian
Jiantao Jiao
Jason Weston
Sainbayar Sukhbaatar
ALM
KELM
LRM
28 Jul 2024

SimPO: Simple Preference Optimization with a Reference-Free Reward
Yu Meng
Mengzhou Xia
Danqi Chen
23 May 2024

Controllable Text Generation in the Instruction-Tuning Era
D. Ashok
Barnabás Póczos
02 May 2024

Enhancing Pre-Trained Generative Language Models with Question Attended Span Extraction on Machine Reading Comprehension
Lin Ai
Zheng Hui
Zizhou Liu
Julia Hirschberg
27 Apr 2024

DESTEIN: Navigating Detoxification of Language Models via Universal Steering Pairs and Head-wise Activation Fusion
Yu Li
Zhihua Wei
Han Jiang
Chuanyang Gong
LLMSV
16 Apr 2024

CLHA: A Simple yet Effective Contrastive Learning Framework for Human Alignment
Feiteng Fang
Liang Zhu
Min Yang
Xi Feng
Jinchang Hou
Qixuan Zhao
Chengming Li
Xiping Hu
Ruifeng Xu
25 Mar 2024

Socratic Reasoning Improves Positive Text Rewriting
Anmol Goel
Nico Daheim
Iryna Gurevych
LRM
05 Mar 2024

Self-Debiasing Large Language Models: Zero-Shot Recognition and Reduction of Stereotypes
Isabel O. Gallegos
Ryan A. Rossi
Joe Barrow
Md Mehrab Tanjim
Tong Yu
Hanieh Deilamsalehy
Ruiyi Zhang
Sungchul Kim
Franck Dernoncourt
03 Feb 2024

On Prompt-Driven Safeguarding for Large Language Models
Chujie Zheng
Fan Yin
Hao Zhou
Fandong Meng
Jie Zhou
Kai-Wei Chang
Minlie Huang
Nanyun Peng
AAML
31 Jan 2024

Self-Rewarding Language Models
Weizhe Yuan
Richard Yuanzhe Pang
Kyunghyun Cho
Xian Li
Sainbayar Sukhbaatar
Jing Xu
Jason Weston
ReLM
SyDa
ALM
LRM
18 Jan 2024

Mitigating Unhelpfulness in Emotional Support Conversations with Multifaceted AI Feedback
Jiashuo Wang
Chunpu Xu
Chak Tou Leong
Wenjie Li
Jing Li
11 Jan 2024

Some things are more CRINGE than others: Iterative Preference Optimization with the Pairwise Cringe Loss
Jing Xu
Andrew Lee
Sainbayar Sukhbaatar
Jason Weston
27 Dec 2023

Tackling Bias in Pre-trained Language Models: Current Trends and Under-represented Societies
Vithya Yogarajan
Gillian Dobbie
Te Taka Keegan
R. Neuwirth
ALM
03 Dec 2023

Large Language Models Are Not Robust Multiple Choice Selectors
Chujie Zheng
Hao Zhou
Fandong Meng
Jie Zhou
Minlie Huang
07 Sep 2023

Bias and Fairness in Large Language Models: A Survey
Isabel O. Gallegos
Ryan A. Rossi
Joe Barrow
Md Mehrab Tanjim
Sungchul Kim
Franck Dernoncourt
Tong Yu
Ruiyi Zhang
Nesreen Ahmed
AILaw
02 Sep 2023

Building Emotional Support Chatbots in the Era of LLMs
Zhonghua Zheng
Lizi Liao
Yang Deng
Liqiang Nie
AI4MH
17 Aug 2023

CoNT: Contrastive Neural Text Generation
Chen An
Jiangtao Feng
Kai Lv
Lingpeng Kong
Xipeng Qiu
Xuanjing Huang
29 May 2022

Training language models to follow instructions with human feedback
Long Ouyang
Jeff Wu
Xu Jiang
Diogo Almeida
Carroll L. Wainwright
...
Amanda Askell
Peter Welinder
Paul Christiano
Jan Leike
Ryan J. Lowe
OSLM
ALM
04 Mar 2022

AugESC: Dialogue Augmentation with Large Language Models for Emotional Support Conversation
Chujie Zheng
Sahand Sabour
Jiaxin Wen
Zheng Zhang
Minlie Huang
26 Feb 2022