
Tur[k]ingBench: A Challenge Benchmark for Web Agents

Kevin Xu
Yeganeh Kordi
Tanay Nayak
Adi Asija
Yizhong Wang
Kate Sanders
Adam Byerly
Jingyu Zhang
Benjamin Van Durme
Daniel Khashabi
Abstract

Can advanced multi-modal models effectively tackle complex web-based tasks? Such tasks are often found on crowdsourcing platforms, where crowdworkers engage in challenging micro-tasks within web-based environments. Building on this idea, we present TurkingBench, a benchmark consisting of tasks presented as web pages with textual instructions and multi-modal contexts. Unlike previous approaches that rely on artificially synthesized web pages, our benchmark uses natural HTML pages originally designed for crowdsourcing workers to perform various annotation tasks. Each task's HTML instructions are instantiated with different values derived from crowdsourcing tasks, creating diverse instances. This benchmark includes 32.2K instances spread across 158 tasks. To support the evaluation of TurkingBench, we have developed a framework that links chatbot responses to actions on web pages (e.g., modifying a text box, selecting a radio button). We assess the performance of cutting-edge private and open-source models, including language-only and vision-language models (such as GPT4 and InternVL), on this benchmark. Our results show that while these models outperform random chance, there is still significant room for improvement. We hope that this benchmark will drive progress in the evaluation and development of web-based agents.
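To make the abstract's evaluation framework concrete, the sketch below illustrates one plausible way a model's textual response could be translated into edits of a task's HTML page. This is not the authors' implementation; the action format ("modify <id> <value>", "select <id>") and element IDs are hypothetical, and the HTML manipulation simply uses BeautifulSoup.

# Minimal sketch (assumed action format, not the TurkingBench framework itself)
from bs4 import BeautifulSoup

def apply_action(html: str, action: str) -> str:
    """Apply one predicted action to the page and return the updated HTML."""
    soup = BeautifulSoup(html, "html.parser")
    verb, target, *rest = action.split(maxsplit=2)
    element = soup.find(id=target)
    if element is None:
        return html  # unknown target: leave the page unchanged
    if verb == "modify":      # fill a text box
        element["value"] = rest[0] if rest else ""
    elif verb == "select":    # tick a radio button or checkbox
        element["checked"] = "checked"
    return str(soup)

page = '<input type="text" id="q1"><input type="radio" id="opt_a" name="q2">'
page = apply_action(page, "modify q1 some answer")
page = apply_action(page, "select opt_a")
print(page)

In a setup like this, scoring would then compare the final state of the form elements against the reference annotations for that task instance.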

@article{xu2025_2403.11905,
  title={Tur[k]ingBench: A Challenge Benchmark for Web Agents},
  author={Kevin Xu and Yeganeh Kordi and Tanay Nayak and Adi Asija and Yizhong Wang and Kate Sanders and Adam Byerly and Jingyu Zhang and Benjamin Van Durme and Daniel Khashabi},
  journal={arXiv preprint arXiv:2403.11905},
  year={2025}
}