ResearchTrend.AI

arXiv:2007.03349 · Cited By
RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr

7 July 2020
Xingjian Li, Haoyi Xiong, Haozhe An, Chengzhong Xu, Dejing Dou
ODL

Papers citing "RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr" (12 of 12 papers shown)
Co-occurrence is not Factual Association in Language Models
  Xiao Zhang, Miao Li, Ji Wu · KELM · 59 · 2 · 0 · 21 Sep 2024
Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization
  Yuhang Li, Xin Dong, Chen Chen, Jingtao Li, Yuxin Wen, Michael Spranger, Lingjuan Lyu · DiffM · 28 · 4 · 0 · 28 Mar 2024
Conserve-Update-Revise to Cure Generalization and Robustness Trade-off in Adversarial Training
  Shruthi Gowda, Bahram Zonooz, Elahe Arani · AAML · 21 · 2 · 0 · 26 Jan 2024
Reset It and Forget It: Relearning Last-Layer Weights Improves Continual and Transfer Learning
  Lapo Frati, Neil Traft, Jeff Clune, Nick Cheney · CLL · 19 · 0 · 0 · 12 Oct 2023
The Dormant Neuron Phenomenon in Deep Reinforcement Learning
  Ghada Sokar, Rishabh Agarwal, P. S. Castro, Utku Evci · CLL · 40 · 88 · 0 · 24 Feb 2023
Improving Fine-tuning of Self-supervised Models with Contrastive Initialization
  Haolin Pan, Yong Guo, Qinyi Deng, Hao-Fan Yang, Yiqun Chen, Jian Chen · SSL · 18 · 19 · 0 · 30 Jul 2022
When Does Re-initialization Work?
  Sheheryar Zaidi, Tudor Berariu, Hyunjik Kim, J. Bornschein, Claudia Clopath, Yee Whye Teh, Razvan Pascanu · 30 · 10 · 0 · 20 Jun 2022
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time
  Mitchell Wortsman, Gabriel Ilharco, S. Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, ..., Hongseok Namkoong, Ali Farhadi, Y. Carmon, Simon Kornblith, Ludwig Schmidt · MoMe · 42 · 906 · 1 · 10 Mar 2022
Adaptive Consistency Regularization for Semi-Supervised Transfer Learning
  Abulikemu Abuduweili, Xingjian Li, Humphrey Shi, Chengzhong Xu, Dejing Dou · 22 · 77 · 0 · 03 Mar 2021
Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning
  Weifeng Ge, Yizhou Yu · 86 · 233 · 0 · 28 Feb 2017
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
  Y. Gal, Zoubin Ghahramani · UQCV, BDL · 252 · 9,134 · 0 · 06 Jun 2015
Improving neural networks by preventing co-adaptation of feature detectors
  Geoffrey E. Hinton, Nitish Srivastava, A. Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov · VLM · 245 · 7,633 · 0 · 03 Jul 2012