Comprehensive benchmarking of large language models for RNA secondary structure prediction

21 October 2024
L. I. Zablocki
L. A. Bugnon
M. Gerard
L. Di Persia
G. Stegmayer
D. H. Milone
Abstract

Inspired by the success of large language models (LLMs) for DNA and proteins, several LLMs for RNA have been developed recently. RNA-LLMs use large datasets of RNA sequences to learn, in a self-supervised way, how to represent each RNA base with a semantically rich numerical vector. This is done under the hypothesis that obtaining high-quality RNA representations can enhance data-costly downstream tasks. Among these, predicting the secondary structure is a fundamental task for uncovering RNA functional mechanisms. In this work, we present a comprehensive experimental analysis of several pre-trained RNA-LLMs, comparing them on the RNA secondary structure prediction task in a unified deep learning framework. The RNA-LLMs were assessed with increasing generalization difficulty on benchmark datasets. Results showed that two LLMs clearly outperform the other models, and revealed significant challenges for generalization in low-homology scenarios.
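To make the setup concrete, the sketch below shows one common way per-base embeddings from a pre-trained RNA-LLM can feed a secondary structure predictor: a small head that scores every pair of bases and outputs an L x L base-pairing probability matrix. This is a minimal illustration only, not the paper's unified framework; the embedding dimension, hidden size, and the random tensor standing in for LLM embeddings are all placeholder assumptions.

```python
# Hedged sketch: generic pairing head on top of per-base RNA-LLM embeddings.
# All sizes and the stand-in embeddings are assumptions, not the paper's setup.
import torch
import torch.nn as nn


class PairingHead(nn.Module):
    """Maps (L, d) per-base embeddings to an (L, L) base-pairing probability matrix."""

    def __init__(self, embed_dim: int, hidden_dim: int = 128):
        super().__init__()
        self.proj = nn.Linear(embed_dim, hidden_dim)
        # Bilinear score for every (i, j) pair of projected base embeddings.
        self.pair_scorer = nn.Bilinear(hidden_dim, hidden_dim, 1)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (L, embed_dim), one vector per RNA base from the LLM encoder.
        h = torch.relu(self.proj(embeddings))            # (L, hidden_dim)
        L = h.size(0)
        hi = h.unsqueeze(1).expand(L, L, -1)             # rows: base i
        hj = h.unsqueeze(0).expand(L, L, -1)             # cols: base j
        scores = self.pair_scorer(
            hi.reshape(L * L, -1), hj.reshape(L * L, -1)
        ).view(L, L)
        # Symmetrize so the score of (i, j) equals that of (j, i).
        scores = 0.5 * (scores + scores.T)
        return torch.sigmoid(scores)                     # pairing probabilities


if __name__ == "__main__":
    seq_len, embed_dim = 60, 640                         # placeholder sizes
    # Stand-in for embeddings produced by any pre-trained RNA-LLM.
    base_embeddings = torch.randn(seq_len, embed_dim)
    head = PairingHead(embed_dim)
    pair_probs = head(base_embeddings)                   # (60, 60) matrix
    print(pair_probs.shape)
```

Because the head only consumes a generic (L, d) embedding tensor, the same downstream model can be reused across different RNA-LLMs, which is the kind of embedding-agnostic comparison the benchmark relies on.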

@article{zablocki2025_2410.16212,
  title={Comprehensive benchmarking of large language models for RNA secondary structure prediction},
  author={L. I. Zablocki and L. A. Bugnon and M. Gerard and L. Di Persia and G. Stegmayer and D. H. Milone},
  journal={arXiv preprint arXiv:2410.16212},
  year={2025}
}