Benchmarking Large Language Models on Multiple Tasks in Bioinformatics NLP with Prompting

6 March 2025
Jiyue Jiang, Pengan Chen, Jiuming Wang, Dongchen He, Ziqin Wei, Liang Hong, Licheng Zong, Sheng Wang, Qinze Yu, Zixian Ma, Yanyu Chen, Yimin Fan, Xiangyu Shi, Jiawei Sun, Chuan Wu, Yu Li
Abstract

Large language models (LLMs) have become important tools for solving biological problems, offering improvements in accuracy and adaptability over conventional methods. Several benchmarks have been proposed to evaluate these LLMs, but existing benchmarks struggle to assess model performance effectively across diverse tasks. In this paper, we introduce a comprehensive prompting-based benchmarking framework, termed Bio-benchmark, which includes 30 key bioinformatics tasks covering areas such as proteins, RNA, drugs, electronic health records, and traditional Chinese medicine. Using this benchmark, we evaluate six mainstream LLMs, including GPT-4o and Llama-3.1-70b, in zero-shot and few-shot Chain-of-Thought (CoT) settings without fine-tuning to reveal their intrinsic capabilities. To improve the efficiency of our evaluations, we present BioFinder, a new tool for extracting answers from LLM responses, which increases extraction accuracy by around 30% compared to existing methods. Our benchmark results identify the biological tasks for which current LLMs are well suited and highlight specific areas that require improvement. Furthermore, we propose targeted prompt engineering strategies to optimize LLM performance in these contexts. Based on these findings, we provide recommendations for developing more robust LLMs tailored to various biological applications. This work offers a comprehensive evaluation framework and robust tools to support the application of LLMs in bioinformatics.
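
The evaluation protocol sketched in the abstract (zero-shot or few-shot CoT prompting without fine-tuning, followed by automatic answer extraction) can be illustrated roughly as below. This is a minimal Python sketch under stated assumptions: the function names build_prompt and extract_answer and the regex-based extraction are illustrative placeholders, not the paper's actual Bio-benchmark harness or BioFinder tool.

import re

def build_prompt(question, examples=None):
    # Compose a zero-shot (no examples) or few-shot Chain-of-Thought prompt.
    parts = []
    for q, a in (examples or []):
        parts.append("Question: " + q + "\nLet's think step by step.\nAnswer: " + a + "\n")
    parts.append("Question: " + question + "\nLet's think step by step.\nAnswer:")
    return "\n".join(parts)

def extract_answer(response):
    # Crude stand-in for an answer extractor: keep the text after the last "Answer:".
    matches = re.findall(r"Answer:\s*(.+)", response)
    return matches[-1].strip() if matches else None

# Illustrative few-shot CoT prompt for a toy protein-property question.
demo = [("Is the tetrapeptide KKKK positively charged at pH 7?", "Yes")]
prompt = build_prompt("Is the tetrapeptide DDDD negatively charged at pH 7?", demo)
print(prompt)
print(extract_answer("The aspartate side chains are deprotonated at pH 7. Answer: Yes"))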

@article{jiang2025_2503.04013,
  title={Benchmarking Large Language Models on Multiple Tasks in Bioinformatics NLP with Prompting},
  author={Jiyue Jiang and Pengan Chen and Jiuming Wang and Dongchen He and Ziqin Wei and Liang Hong and Licheng Zong and Sheng Wang and Qinze Yu and Zixian Ma and Yanyu Chen and Yimin Fan and Xiangyu Shi and Jiawei Sun and Chuan Wu and Yu Li},
  journal={arXiv preprint arXiv:2503.04013},
  year={2025}
}