arXiv 2008.11687
What is being transferred in transfer learning?
Behnam Neyshabur, Hanie Sedghi, Chiyuan Zhang
26 August 2020
Papers citing "What is being transferred in transfer learning?" (38 of 88 papers shown)
- Efficient Methods for Natural Language Processing: A Survey (31 Aug 2022)
  Marcos Vinícius Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, ..., Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz

- Exploring the Design of Adaptation Protocols for Improved Generalization and Machine Learning Safety [AAML] (26 Jul 2022)
  Puja Trivedi, Danai Koutra, Jayaraman J. Thiagarajan

- Digital-twin-enhanced metal tube bending forming real-time prediction method based on Multi-source-input MTL [AI4CE] (03 Jul 2022)
  Chang-Hai Sun, Zili Wang, Shuyou Zhang, Tao Zhou, Jie Li, Jianrong Tan

- Explaining the physics of transfer learning a data-driven subgrid-scale closure to a different turbulent flow [AI4CE] (07 Jun 2022)
  Adam Subel, Yifei Guan, A. Chattopadhyay, P. Hassanzadeh

- A Cross-City Federated Transfer Learning Framework: A Case Study on Urban Region Profiling (31 May 2022)
  Gaode Chen, Yijun Su, Xinghua Zhang, Anmin Hu, Guochun Chen, Siyuan Feng, Jinlin Xiang, Junbo Zhang, Yu Zheng

- Linear Connectivity Reveals Generalization Strategies (24 May 2022)
  Jeevesh Juneja, Rachit Bansal, Kyunghyun Cho, João Sedoc, Naomi Saphra

- Interpolating Compressed Parameter Subspaces (19 May 2022)
  Siddhartha Datta, N. Shadbolt

- How do Variational Autoencoders Learn? Insights from Representational Similarity [CoGe, SSL, DRL] (17 May 2022)
  Lisa Bonheme, M. Grzes

- Learn2Weight: Parameter Adaptation against Similar-domain Adversarial Attacks [AAML] (15 May 2022)
  Siddhartha Datta

- Hybrid quantum ResNet for car classification and its hyperparameter optimization (10 May 2022)
  Asel Sagingalieva, Mohammad Kordzanganeh, Andrii Kurkin, Artem Melnikov, Daniil Kuhmistrov, M. Perelshtein, A. Melnikov, Andrea Skolik, David Von Dollen

- CAiD: Context-Aware Instance Discrimination for Self-supervised Learning in Medical Imaging [OOD, SSL] (15 Apr 2022)
  M. Taher, F. Haghighi, Michael B. Gotway, Jianming Liang

- Fusing finetuned models for better pretraining [FedML, AI4CE, MoMe] (06 Apr 2022)
  Leshem Choshen, Elad Venezian, Noam Slonim, Yoav Katz

- Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations [OOD] (06 Apr 2022)
  Polina Kirichenko, Pavel Izmailov, A. Wilson

- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time [MoMe] (10 Mar 2022)
  Mitchell Wortsman, Gabriel Ilharco, S. Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, ..., Hongseok Namkoong, Ali Farhadi, Y. Carmon, Simon Kornblith, Ludwig Schmidt

- Climate Change & Computer Audition: A Call to Action and Overview on Audio Intelligence to Help Save the Planet (10 Mar 2022)
  Björn W. Schuller, Ali Akman, Yi-Fen Chang, H. Coppock, Alexander Gebhard, Alexander Kathan, Esther Rituerto-González, Andreas Triantafyllopoulos, Florian B. Pokorny

- What Makes Transfer Learning Work For Medical Images: Feature Reuse & Other Factors [VLM, OOD, MedIm] (02 Mar 2022)
  Christos Matsoukas, Johan Fredin Haslum, Moein Sorkhei, Magnus P Soderberg, Kevin Smith

- Auto-Transfer: Learning to Route Transferrable Representations [AAML] (02 Feb 2022)
  K. Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar

- When Do Flat Minima Optimizers Work? [ODL] (01 Feb 2022)
  Jean Kaddour, Linqing Liu, Ricardo M. A. Silva, Matt J. Kusner

- Deconfounded Representation Similarity for Comparison of Neural Networks [CML] (31 Jan 2022)
  Tianyu Cui, Yogesh Kumar, Pekka Marttinen, Samuel Kaski

- An Educated Warm Start For Deep Image Prior-Based Micro CT Reconstruction (23 Nov 2021)
  Riccardo Barbano, Johannes Leuschner, Maximilian Schmidt, Alexander Denker, A. Hauptmann, Peter Maass, Bangti Jin

- AI Ethics Statements -- Analysis and lessons learnt from NeurIPS Broader Impact Statements (02 Nov 2021)
  Carolyn Ashurst, Emmie Hine, Paul Sedille, A. Carlier

- No One Representation to Rule Them All: Overlapping Features of Training Methods (20 Oct 2021)
  Raphael Gontijo-Lopes, Yann N. Dauphin, E. D. Cubuk

- Behavioral Experiments for Understanding Catastrophic Forgetting (20 Oct 2021)
  Samuel J. Bell, Neil D. Lawrence

- Representational Continuity for Unsupervised Continual Learning [CLL, SSL] (13 Oct 2021)
  Divyam Madaan, Jaehong Yoon, Yuanchun Li, Yunxin Liu, Sung Ju Hwang

- The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks [MoMe] (12 Oct 2021)
  R. Entezari, Hanie Sedghi, O. Saukh, Behnam Neyshabur

- Exploring the Limits of Large Scale Pre-training [AI4CE] (05 Oct 2021)
  Samira Abnar, Mostafa Dehghani, Behnam Neyshabur, Hanie Sedghi

- Robust fine-tuning of zero-shot models [VLM] (04 Sep 2021)
  Mitchell Wortsman, Gabriel Ilharco, Jong Wook Kim, Mike Li, Simon Kornblith, ..., Raphael Gontijo-Lopes, Hannaneh Hajishirzi, Ali Farhadi, Hongseok Namkoong, Ludwig Schmidt

- Knowledge accumulating: The general pattern of learning [CLL] (09 Aug 2021)
  Zhuoran Xu, Hao Liu

- A Theoretical Analysis of Fine-tuning with Linear Teachers (04 Jul 2021)
  Gal Shachaf, Alon Brutzkus, Amir Globerson

- What can linear interpolation of neural network loss landscapes tell us? [MoMe] (30 Jun 2021)
  Tiffany J. Vlaar, Jonathan Frankle

- Randomness In Neural Network Training: Characterizing The Impact of Tooling (22 Jun 2021)
  Donglin Zhuang, Xingyao Zhang, S. Song, Sara Hooker

- Learning Invariant Representations across Domains and Tasks [OOD] (03 Mar 2021)
  Jindong Wang, Wenjie Feng, Chang-Shu Liu, Chaohui Yu, Min Du, Renjun Xu, Tao Qin, Tie-Yan Liu

- LogME: Practical Assessment of Pre-trained Models for Transfer Learning (22 Feb 2021)
  Kaichao You, Yong Liu, Jianmin Wang, Mingsheng Long

- BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data [SSL] (28 Jan 2021)
  Demetres Kostas, Stephane Aroca-Ouellette, Frank Rudzicz

- An analysis of the transfer learning of convolutional neural networks for artistic images (05 Nov 2020)
  Nicolas Gonthier, Y. Gousseau, Saïd Ladjal

- Linear Mode Connectivity in Multitask and Continual Learning [CLL] (09 Oct 2020)
  Seyed Iman Mirzadeh, Mehrdad Farajtabar, Dilan Görür, Razvan Pascanu, H. Ghasemzadeh

- Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML (19 Sep 2019)
  Aniruddh Raghu, M. Raghu, Samy Bengio, Oriol Vinyals

- On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima [ODL] (15 Sep 2016)
  N. Keskar, Dheevatsa Mudigere, J. Nocedal, M. Smelyanskiy, P. T. P. Tang