Improving Stability of Fine-Tuning Pretrained Language Models via Component-Wise Gradient Norm Clipping

19 October 2022
Chenghao Yang, Xuezhe Ma
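The paper's title refers to component-wise gradient norm clipping, i.e. clipping the gradient norm separately for each component of the network rather than once over all parameters. As a rough illustration only (the grouping into top-level submodules and the threshold below are assumptions, not the authors' reported setup), a PyTorch sketch of the general idea could look like:

import torch
from torch import nn

def clip_grad_norm_componentwise(model: nn.Module, max_norm: float = 1.0) -> None:
    # Clip the gradient norm of each top-level submodule separately,
    # instead of clipping one global norm over all parameters.
    # NOTE: the per-submodule grouping and max_norm=1.0 are illustrative
    # assumptions, not taken from the paper.
    for _, module in model.named_children():
        params = [p for p in module.parameters() if p.grad is not None]
        if params:
            torch.nn.utils.clip_grad_norm_(params, max_norm)

# Typical use inside a fine-tuning step, after loss.backward() and before optimizer.step():
#   clip_grad_norm_componentwise(model, max_norm=1.0)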

Papers citing "Improving Stability of Fine-Tuning Pretrained Language Models via Component-Wise Gradient Norm Clipping"

6 / 6 papers shown
Prompted LLMs as Chatbot Modules for Long Open-domain Conversation
Gibbeum Lee, Volker Hartmann, Jongho Park, Dimitris Papailiopoulos, Kangwook Lee
08 May 2023

Measuring the Instability of Fine-Tuning
Yupei Du, D. Nguyen
15 Feb 2023

A Stability Analysis of Fine-Tuning a Pre-Trained Model
Z. Fu, Anthony Man-Cho So, Nigel Collier
24 Jan 2023

Improving Pre-trained Language Model Fine-tuning with Noise Stability Regularization
Hang Hua, Xingjian Li, Dejing Dou, Chengzhong Xu, Jiebo Luo
12 Jun 2022

Multitask Prompted Training Enables Zero-Shot Task Generalization
Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
LRM
15 Oct 2021

Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
MoE
25 Sep 2019