Contrastive Distillation Is a Sample-Efficient Self-Supervised Loss Policy for Transfer Learning

21 December 2022
Christopher T. Lengerich
Gabriel Synnaeve
Amy Zhang
Hugh Leather
Kurt Shuster
François Charton
Charysse Redwood
Topics: SSL, OffRL

Papers citing "Contrastive Distillation Is a Sample-Efficient Self-Supervised Loss Policy for Transfer Learning"

3 papers:

1. Augmented Language Models: a Survey
   Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ramakanth Pasunuru, ..., Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, Thomas Scialom
   Topics: LRM, KELM
   Citations: 476
   15 Feb 2023

2. Contrastive Representation Distillation
   International Conference on Learning Representations (ICLR), 2019
   Yonglong Tian, Dilip Krishnan, Phillip Isola
   Citations: 1,195
   23 Oct 2019

3. Billion-scale similarity search with GPUs
   IEEE Transactions on Big Data (TBD), 2017
   Jeff Johnson, Matthijs Douze, Edouard Grave
   Citations: 4,389
   28 Feb 2017