Best Practices for Distilling Large Language Models into BERT for Web Search Ranking

7 November 2024
Dezhi Ye, Junwei Hu, Jiabin Fan, Bowen Tian, Jie Liu, Haijin Liang, Jin Ma

Papers citing "Best Practices for Distilling Large Language Models into BERT for Web Search Ranking"

1 paper shown

KETCHUP: K-Step Return Estimation for Sequential Knowledge Distillation
Jiabin Fan, Guoqing Luo, Michael Bowling, Lili Mou
Topics: OffRL
26 Apr 2025