A Unified Knowledge-Distillation and Semi-Supervised Learning Framework to Improve Industrial Ads Delivery Systems

5 February 2025
Hamid Eghbalzadeh
Yang Wang
Rui Li
Yuji Mo
Qin Ding
Jiaxiang Fu
Liang Dai
Shuo Gu
Nima Noorshams
Sem Park
Bo Long
Xue Feng
Abstract

Industrial ads ranking systems conventionally rely on labeled impression data, which leads to challenges such as overfitting, slower incremental gains from model scaling, and biases due to discrepancies between training and serving data. To overcome these issues, we propose a Unified framework for Knowledge-Distillation and Semi-supervised Learning (UKDSL) for ads ranking, enabling models to be trained on significantly larger and more diverse datasets, thereby reducing overfitting and mitigating training-serving data discrepancies. We provide a detailed formal analysis and numerical simulations of the inherent miscalibration and prediction bias of multi-stage ranking systems, and show empirical evidence of the proposed framework's ability to mitigate them. Compared to prior work, UKDSL enables models to learn from a much larger set of unlabeled data, improving performance while remaining computationally efficient. Finally, we report the successful deployment of UKDSL in an industrial setting across various ranking models, serving users at multi-billion scale across various surfaces, geographic locations, and clients, and optimizing for various events, which to the best of our knowledge is the first of its kind in terms of the scale and efficiency at which it operates.
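
The abstract describes combining knowledge distillation with semi-supervised learning so a ranking model can train on unlabeled traffic via teacher-generated soft labels. Below is a minimal sketch of that general idea, not the authors' implementation: the model architecture, loss weighting, and all names (RankerMLP, ukdsl_style_loss, distill_weight) are hypothetical, and PyTorch is assumed for illustration.

```python
# Hedged sketch: supervised loss on labeled impressions plus a distillation
# loss against teacher pseudo-labels on a larger unlabeled pool. This is a
# generic KD + semi-supervised recipe, not the paper's exact method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RankerMLP(nn.Module):
    """Toy stand-in for an ads ranking model producing a click logit."""

    def __init__(self, num_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # raw logit per example


def ukdsl_style_loss(student, teacher, x_labeled, y_labeled, x_unlabeled,
                     distill_weight: float = 1.0) -> torch.Tensor:
    """Cross-entropy on labeled data plus distillation toward teacher
    soft labels on unlabeled data (weighting is an assumption)."""
    sup = F.binary_cross_entropy_with_logits(student(x_labeled), y_labeled)
    with torch.no_grad():
        soft_targets = torch.sigmoid(teacher(x_unlabeled))  # teacher pseudo-labels
    distill = F.binary_cross_entropy_with_logits(student(x_unlabeled), soft_targets)
    return sup + distill_weight * distill


if __name__ == "__main__":
    torch.manual_seed(0)
    num_features = 16
    teacher = RankerMLP(num_features)  # assumed pre-trained; kept frozen here
    student = RankerMLP(num_features)
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    for step in range(100):
        x_lab = torch.randn(32, num_features)        # labeled impressions
        y_lab = torch.randint(0, 2, (32,)).float()   # observed click labels
        x_unlab = torch.randn(256, num_features)     # larger unlabeled pool
        loss = ukdsl_style_loss(student, teacher, x_lab, y_lab, x_unlab)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In this sketch the unlabeled batch is deliberately larger than the labeled one, mirroring the abstract's point that the framework lets the student learn from a much larger set of unlabeled data than impression labels alone would allow.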

@article{eghbalzadeh2025_2502.06834,
  title={A Unified Knowledge-Distillation and Semi-Supervised Learning Framework to Improve Industrial Ads Delivery Systems},
  author={Hamid Eghbalzadeh and Yang Wang and Rui Li and Yuji Mo and Qin Ding and Jiaxiang Fu and Liang Dai and Shuo Gu and Nima Noorshams and Sem Park and Bo Long and Xue Feng},
  journal={arXiv preprint arXiv:2502.06834},
  year={2025}
}