HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation

17 December 2022
Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Fei Huang, Songfang Huang
ArXiv · PDF · HTML
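
As the title suggests, HyPe fine-tunes a pre-trained language model while perturbing the hidden representations that flow between Transformer layers. The snippet below is a minimal PyTorch sketch of that idea, assuming the perturbation amounts to adding small random noise to intermediate hidden states during training; the class names HiddenPerturbation and PerturbedLayer and the noise_std parameter are illustrative choices, not taken from the paper.

import torch
from torch import nn


class HiddenPerturbation(nn.Module):
    # Adds small Gaussian noise to hidden states during training only.
    # The noise type and scale are assumptions for illustration.
    def __init__(self, noise_std: float = 1e-5):
        super().__init__()
        self.noise_std = noise_std

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        if self.training and self.noise_std > 0:
            return hidden_states + torch.randn_like(hidden_states) * self.noise_std
        return hidden_states


class PerturbedLayer(nn.Module):
    # Wraps an existing Transformer layer so its output hidden state is
    # perturbed before being passed to the next layer during fine-tuning.
    def __init__(self, layer: nn.Module, noise_std: float = 1e-5):
        super().__init__()
        self.layer = layer
        self.perturb = HiddenPerturbation(noise_std)

    def forward(self, hidden_states, *args, **kwargs):
        outputs = self.layer(hidden_states, *args, **kwargs)
        if isinstance(outputs, tuple):
            # Hugging Face-style layers return a tuple whose first element
            # is the hidden state; perturb only that element.
            return (self.perturb(outputs[0]),) + outputs[1:]
        return self.perturb(outputs)

In evaluation mode the wrapper is a no-op, so only fine-tuning is affected; wrapping each encoder layer of a model such as BERT with PerturbedLayer before training would apply the perturbation at every layer boundary.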

Papers citing "HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation"

15 of 15 citing papers shown, most recent first.
See-Saw Modality Balance: See Gradient, and Sew Impaired Vision-Language Balance to Mitigate Dominant Modality Bias
Junehyoung Kwon, Mihyeon Kim, Eunju Lee, Juhwan Choi, Youngbin Kim
18 Mar 2025

Pretraining Generative Flow Networks with Inexpensive Rewards for Molecular Graph Generation
Mohit Pandey, G. Subbaraj, Artem Cherkasov, Martin Ester, Emmanuel Bengio
AI4CE
08 Mar 2025

Evaluating Concurrent Robustness of Language Models Across Diverse Challenge Sets
Vatsal Gupta, Pranshu Pandya, Tushar Kataria, Vivek Gupta, Dan Roth
AAML
03 Jan 2025

VersaTune: An Efficient Data Composition Framework for Training Multi-Capability LLMs
Keer Lu, Keshi Zhao, Zheng Liang, Da Pan, Shusen Zhang, ..., Weipeng Chen, Zenan Zhou, Guosheng Dong, Bin Cui, Wentao Zhang
VLM
18 Nov 2024

GFlowNet Pretraining with Inexpensive Rewards
Mohit Pandey, G. Subbaraj, Emmanuel Bengio
AI4CE
15 Sep 2024

Beyond IID: Optimizing Instruction Learning from the Perspective of Instruction Interaction and Dependency
Hanyu Zhao, Li Du, Yiming Ju, Chengwei Wu, Tengfei Pan
11 Sep 2024

Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment
Keming Lu, Bowen Yu, Fei Huang, Yang Fan, Runji Lin, Chang Zhou
MoMe
28 May 2024

How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition
Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng Yuan, Chang Zhou, Jingren Zhou
LRM, CLL
09 Oct 2023

Bi-Drop: Enhancing Fine-tuning Generalization via Synchronous sub-net Estimation and Optimization
Shoujie Tong, Heming Xia, Damai Dai, Runxin Xu, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui
24 May 2023

VECO 2.0: Cross-lingual Language Model Pre-training with Multi-granularity Contrastive Learning
Zhen-Ru Zhang, Chuanqi Tan, Songfang Huang, Fei Huang
VLM
17 Apr 2023

RRHF: Rank Responses to Align Language Models with Human Feedback without tears
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, Feiran Huang
ALM
11 Apr 2023

TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning
Yixuan Su, Fangyu Liu, Zaiqiao Meng, Tian Lan, Lei Shu, Ehsan Shareghi, Nigel Collier
07 Nov 2021

Raise a Child in Large Language Model: Towards Effective and Generalizable Fine-tuning
Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang
LRM
13 Sep 2021

Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
MoE
25 Sep 2019

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R. Bowman
ELM
20 Apr 2018