Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization

29 June 2020
Sang Michael Xie, Tengyu Ma, Percy Liang

Papers citing "Composed Fine-Tuning: Freezing Pre-Trained Denoising Autoencoders for Improved Generalization"

5 / 5 papers shown

Do we really have to filter out random noise in pre-training data for language models?
Jinghan Ru, Yuxin Xie, Xianwei Zhuang, Yuguo Yin, Zhihui Guo, Zhiming Liu, Qianli Ren, Yuexian Zou
83 · 2 · 0 · 10 Feb 2025

Open Domain Generalization with a Single Network by Regularization Exploiting Pre-trained Features
Inseop Chung, Kiyoon Yoo, Nojun Kwak
VLM
16 · 0 · 0 · 08 Dec 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML
25 · 20 · 0 · 14 Feb 2023

The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
VPVLM
280 · 3,844 · 0 · 18 Apr 2021

Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models
Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang
MoE
240 · 205 · 0 · 25 Sep 2019