Modern industrial advertising systems commonly employ Multi-stage Cascading Architectures (MCA) to balance computational efficiency with ranking accuracy. However, this approach presents two fundamental challenges: (1) performance inconsistencies arising from divergent optimization targets and capability differences between stages, and (2) failure to account for advertisement externalities, i.e., the complex interactions among candidate ads during ranking. These limitations ultimately compromise system effectiveness and reduce platform profitability. In this paper, we present EGA-V1, an end-to-end generative architecture that unifies online advertising ranking in a single model. EGA-V1 replaces the cascaded stages with one model that directly generates optimal ad sequences from the full candidate ad corpus in location-based services (LBS). The primary challenges of this approach are the high cost of feature processing and the computational bottleneck of modeling externalities over large-scale candidate pools. To address these challenges, EGA-V1 introduces an algorithm-and-engine co-designed hybrid feature service that decouples user and ad feature processing, reducing latency while preserving expressiveness. To efficiently extract intra- and cross-sequence mutual information, we propose RecFormer with an innovative cluster-attention mechanism as its core architectural component. Furthermore, we propose a bi-stage training strategy that integrates pre-training with reinforcement learning-based post-training to meet sophisticated platform and advertising objectives. Extensive offline evaluations on public benchmarks and large-scale online A/B testing on an industrial advertising platform demonstrate the superior performance of EGA-V1 over state-of-the-art MCAs.
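The abstract does not describe how the cluster-attention mechanism inside RecFormer is implemented, so the following is only a minimal, hypothetical sketch of one plausible reading: candidate ads are softly assigned to a small set of learned clusters, and each candidate then attends to the cluster summaries instead of to every other candidate, reducing the pairwise externality-modeling cost from roughly O(N^2) to O(N*K) with K << N. All names and details here (ClusterAttention, num_clusters, the soft-assignment step) are illustrative assumptions, not the paper's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ClusterAttention(nn.Module):
    # Hypothetical sketch: pool N candidate ads into K learned cluster
    # summaries, then let every candidate attend to those summaries to pick up
    # sequence-level (externality) context at reduced cost.

    def __init__(self, dim: int, num_clusters: int = 16, num_heads: int = 4):
        super().__init__()
        # Learned cluster centroids used to soft-assign candidates to clusters.
        self.centroids = nn.Parameter(torch.randn(num_clusters, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, ads: torch.Tensor) -> torch.Tensor:
        # ads: (batch, num_candidates, dim)
        # Soft assignment of each candidate ad to the learned clusters.
        assign = F.softmax(ads @ self.centroids.t(), dim=-1)         # (B, N, K)
        # Cluster summaries: per-cluster weighted average of the candidates.
        weights = assign / (assign.sum(dim=1, keepdim=True) + 1e-6)  # (B, N, K)
        clusters = weights.transpose(1, 2) @ ads                     # (B, K, D)
        # Each candidate attends to the K cluster summaries, not to all N ads.
        ctx, _ = self.attn(query=ads, key=clusters, value=clusters)
        return self.norm(ads + ctx)


if __name__ == "__main__":
    x = torch.randn(2, 512, 64)      # 512 candidate ads, 64-dim embeddings
    out = ClusterAttention(dim=64)(x)
    print(out.shape)                 # torch.Size([2, 512, 64])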
@article{qiu2025_2505.19755,
  title   = {EGA-V1: Unifying Online Advertising with End-to-End Learning},
  author  = {Junyan Qiu and Ze Wang and Fan Zhang and Zuowu Zheng and Jile Zhu and Jiangke Fan and Teng Zhang and Haitao Wang and Yongkang Wang and Xingxing Wang},
  journal = {arXiv preprint arXiv:2505.19755},
  year    = {2025}
}