Text Representation Distillation via Information Bottleneck Principle

9 November 2023
Yanzhao Zhang, Dingkun Long, Zehan Li, Pengjun Xie

Papers citing "Text Representation Distillation via Information Bottleneck Principle"

2 papers shown
RetroMAE: Pre-Training Retrieval-oriented Language Models Via Masked Auto-Encoder
Shitao Xiao, Zheng Liu, Yingxia Shao, Zhao Cao
RALM · 24 May 2022

Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval
Luyu Gao, Jamie Callan
RALM · 12 Aug 2021