AdTEC: A Unified Benchmark for Evaluating Text Quality in Search Engine Advertising

Abstract

With the increase in the fluency of ad texts automatically created by natural language generation technology, there is high demand to verify the quality of these creatives in a real-world setting. We propose AdTEC (Ad Text Evaluation Benchmark by CyberAgent), the first public benchmark for evaluating ad texts from multiple perspectives within practical advertising operations. Our contributions are as follows: (i) defining five tasks for evaluating the quality of ad texts and building a Japanese dataset based on the practical operational experience of advertising agencies, which is typically kept in-house; (ii) validating the performance of existing pre-trained language models (PLMs) and human evaluators on the dataset; and (iii) analyzing the characteristics of the benchmark and identifying its remaining challenges. The results show that while PLMs have already reached a practical level of performance in several tasks, humans still outperform them in certain domains, implying that there is significant room for improvement in this area.
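
As a minimal, hypothetical sketch of the kind of PLM validation described above, one could frame an ad-text quality check as binary text classification and score candidate creatives with a fine-tuned checkpoint. The model name, label set, and example texts below are placeholders for illustration only; they are not the AdTEC data format, task definitions, or evaluation code.

# Hypothetical sketch: scoring ad-text quality as binary text classification.
# The checkpoint and example texts are placeholders; in practice a Japanese PLM
# fine-tuned on the AdTEC tasks would replace them.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

ad_texts = [
    "Limited-time sale: save up to 50% on selected items.",  # plausible ad copy
    "best BEST deal deal deal buy now now now!!!",            # low-quality copy
]

with torch.no_grad():
    inputs = tokenizer(ad_texts, padding=True, truncation=True, return_tensors="pt")
    preds = model(**inputs).logits.argmax(dim=-1)

for text, pred in zip(ad_texts, preds):
    print(f"{model.config.id2label[pred.item()]}: {text}")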

@article{zhang2025_2408.05906,
  title={AdTEC: A Unified Benchmark for Evaluating Text Quality in Search Engine Advertising},
  author={Peinan Zhang and Yusuke Sakai and Masato Mita and Hiroki Ouchi and Taro Watanabe},
  journal={arXiv preprint arXiv:2408.05906},
  year={2025}
}