LPFQA: A Long-Tail Professional Forum-based Benchmark for LLM Evaluation

9 November 2025 · arXiv:2511.06346
Liya Zhu, Peizhuang Cong, Aowei Ji, Wenya Wu, Jiani Hou, Chunjie Wu, Xiang Gao, Jingkai Liu, Zhou Huan, Xuelei Sun, Y. Yang, Jianpeng Jiao, Liang Hu, Xinjie Chen, Jiashuo Liu, Jingzhe Ding, Tong Yang, Z. Wang, Ge Zhang, Wenhao Huang

Main: 2 pages · 2 figures · 3 tables · Appendix: 18 pages

Abstract

Large Language Models (LLMs) have made rapid progress in reasoning, question answering, and professional applications; however, their true capabilities remain difficult to evaluate using existing benchmarks. Current datasets often focus on simplified tasks or artificial scenarios, overlooking long-tail knowledge and the complexities of real-world applications. To bridge this gap, we propose LPFQA, a long-tail knowledge-based benchmark derived from authentic professional forums across 20 academic and industrial fields, covering 502 tasks grounded in practical expertise. LPFQA introduces four key innovations: fine-grained evaluation dimensions that target knowledge depth, reasoning, terminology comprehension, and contextual analysis; a hierarchical difficulty structure that ensures semantic clarity and unique answers; authentic professional scenario modeling with realistic user personas; and interdisciplinary knowledge integration across diverse domains. We evaluated 12 mainstream LLMs on LPFQA and observed significant performance disparities, especially in specialized reasoning tasks. LPFQA provides a robust, authentic, and discriminative benchmark for advancing LLM evaluation and guiding future model development.
