Antidistillation Sampling

17 April 2025
Yash Savani, Asher Trockman, Zhili Feng, Avi Schwarzschild, Alexander Robey, Marc Finzi, J. Zico Kolter
Abstract

Frontier models that generate extended reasoning traces inadvertently produce rich token sequences that can facilitate model distillation. Recognizing this vulnerability, model owners may seek sampling strategies that limit the effectiveness of distillation without compromising model performance. Antidistillation sampling provides exactly this capability. By strategically modifying a model's next-token probability distribution, antidistillation sampling poisons reasoning traces, rendering them significantly less effective for distillation while preserving the model's practical utility. For further details, see this https URL.
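To make the core idea concrete, below is a minimal, illustrative sketch of sampling from a modified next-token distribution. It is not the paper's actual algorithm: the penalty vector, meant to stand in for a per-token estimate of how useful that token would be to a would-be distiller, and the trade-off weight lam are hypothetical placeholders for whatever adjustment term the method computes.

import numpy as np

def antidistillation_sample(teacher_logits, penalty, lam=1.0,
                            temperature=1.0, rng=None):
    """Sample a next token from an adjusted distribution.

    teacher_logits : (V,) unmodified next-token logits from the teacher.
    penalty        : (V,) hypothetical per-token score estimating how much
                     each token would help a distiller (a stand-in, not the
                     paper's adjustment term).
    lam            : trade-off between teacher utility and poisoning strength;
                     lam=0 recovers ordinary sampling from the teacher.
    """
    rng = rng or np.random.default_rng()
    # Downweight tokens judged useful for distillation, then renormalize.
    adjusted = (teacher_logits - lam * penalty) / temperature
    adjusted = adjusted - adjusted.max()   # numerical stability
    probs = np.exp(adjusted)
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

# Toy usage with a 5-token vocabulary; token 1 is deemed "too informative".
logits = np.array([2.0, 1.0, 0.5, 0.1, -1.0])
penalty = np.array([0.0, 3.0, 0.0, 0.0, 0.0])
print(antidistillation_sample(logits, penalty, lam=1.0))

The key design point this sketch captures is that the intervention happens purely at sampling time, per token, so the teacher's weights are untouched and generation cost grows only by whatever it takes to compute the penalty term.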

@article{savani2025_2504.13146,
  title={Antidistillation Sampling},
  author={Yash Savani and Asher Trockman and Zhili Feng and Avi Schwarzschild and Alexander Robey and Marc Finzi and J. Zico Kolter},
  journal={arXiv preprint arXiv:2504.13146},
  year={2025}
}