TALE: A Tool-Augmented Framework for Reference-Free Evaluation of Large Language Models

10 April 2025
Sher Badshah
Ali Emami
Hassan Sajjad
Abstract

As Large Language Models (LLMs) become increasingly integrated into real-world, autonomous applications, relying on static, pre-annotated references for evaluation poses significant challenges in cost, scalability, and completeness. We propose Tool-Augmented LLM Evaluation (TALE), a framework to assess LLM outputs without predetermined ground-truth answers. Unlike conventional metrics that compare to fixed references or depend solely on LLM-as-a-judge knowledge, TALE employs an agent with tool-access capabilities that actively retrieves and synthesizes external evidence. It iteratively generates web queries, collects information, summarizes findings, and refines subsequent searches through reflection. By shifting away from static references, TALE aligns with free-form question-answering tasks common in real-world scenarios. Experimental results on multiple free-form QA benchmarks show that TALE not only outperforms standard reference-based metrics for measuring response accuracy but also achieves substantial to near-perfect agreement with human evaluations. TALE enhances the reliability of LLM evaluations in real-world, dynamic scenarios without relying on static references.
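The evaluation loop the abstract describes (generate a web query, retrieve evidence, summarize, reflect and refine, then judge the answer against the gathered evidence) can be sketched as follows. This is an illustrative outline only, not the authors' implementation: `search_fn` and `llm_fn` are hypothetical stand-ins to be replaced by a real web-search API and an LLM call.

```python
def tale_evaluate(question, answer, search_fn, llm_fn, max_rounds=3):
    """Reference-free evaluation sketch: iteratively gather external
    evidence, then judge `answer` against it instead of a gold label."""
    evidence = []
    query = question  # the first search query is the question itself
    for _ in range(max_rounds):
        results = search_fn(query)  # retrieve external evidence
        # Summarize the retrieved material with respect to the question.
        summary = llm_fn(f"Summarize for '{question}': {results}")
        evidence.append(summary)
        # Reflection step: decide whether more evidence is needed and,
        # if so, what the next search query should be.
        reflection = llm_fn(
            f"Given evidence {evidence}, is more needed to verify "
            f"'{answer}'? Reply DONE or a refined search query."
        )
        if reflection.strip() == "DONE":
            break
        query = reflection
    # Final judgment grounded in the collected evidence.
    verdict = llm_fn(
        f"Question: {question}\nAnswer: {answer}\n"
        f"Evidence: {evidence}\nIs the answer correct? Reply YES or NO."
    )
    return verdict.strip() == "YES", evidence
```

In practice, the reflection prompt is what lets the agent recover from an uninformative first search, which is the behavior the abstract attributes to TALE's iterative refinement.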

@article{badshah2025_2504.07385,
  title={TALE: A Tool-Augmented Framework for Reference-Free Evaluation of Large Language Models},
  author={Sher Badshah and Ali Emami and Hassan Sajjad},
  journal={arXiv preprint arXiv:2504.07385},
  year={2025}
}