Inspired by the success of large language models (LLMs) for DNA and proteins, several LLMs for RNA have been developed recently. These RNA-LLMs use large datasets of RNA sequences to learn, in a self-supervised way, how to represent each RNA base with a semantically rich numerical vector, under the hypothesis that high-quality RNA representations can enhance data-costly downstream tasks. Among these tasks, secondary structure prediction is fundamental for uncovering RNA functional mechanisms. In this work we present a comprehensive experimental analysis of several pre-trained RNA-LLMs, comparing them on the RNA secondary structure prediction task within a unified deep learning framework. The RNA-LLMs were assessed on benchmark datasets of increasing generalization difficulty. Results show that two LLMs clearly outperform the other models, and reveal significant challenges for generalization in low-homology scenarios.
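The pipeline the abstract describes, per-base embeddings from a frozen pre-trained encoder feeding a shared secondary-structure prediction head, can be sketched minimally. In the following Python/PyTorch sketch, ToyRNAEncoder is a hypothetical stand-in for a pre-trained RNA-LLM encoder (each benchmarked model has its own tokenizer and architecture), and PairHead is an illustrative head that scores every candidate base pair from concatenated per-base embeddings; neither name nor architecture comes from the paper.

import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained RNA-LLM encoder: it maps each
# nucleotide of a sequence to a d-dimensional embedding vector. In the
# benchmark setting, this would be the frozen encoder of each RNA-LLM.
class ToyRNAEncoder(nn.Module):
    def __init__(self, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(4, d_model)  # A, C, G, U

    def forward(self, tokens):           # tokens: (L,)
        return self.embed(tokens)        # per-base embeddings: (L, d_model)

# Illustrative shared prediction head: combines the embeddings of every
# position pair (i, j) and scores whether bases i and j are paired.
class PairHead(nn.Module):
    def __init__(self, d_model=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_model, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, h):                # h: (L, d_model)
        L, d = h.shape
        hi = h.unsqueeze(1).expand(L, L, d)   # embedding of base i
        hj = h.unsqueeze(0).expand(L, L, d)   # embedding of base j
        logits = self.mlp(torch.cat([hi, hj], dim=-1)).squeeze(-1)  # (L, L)
        return torch.sigmoid(logits)     # base-pairing probability matrix

seq = "GGGAAACCC"
vocab = {"A": 0, "C": 1, "G": 2, "U": 3}
tokens = torch.tensor([vocab[b] for b in seq])

encoder, head = ToyRNAEncoder(), PairHead()
with torch.no_grad():
    pair_probs = head(encoder(tokens))
print(pair_probs.shape)  # torch.Size([9, 9])

Using one shared head across all encoders is what makes the comparison unified: any performance difference is then attributable to the quality of each model's embeddings rather than to the downstream architecture.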
@article{zablocki2025_2410.16212,
  title   = {Comprehensive benchmarking of large language models for RNA secondary structure prediction},
  author  = {L.I. Zablocki and L.A. Bugnon and M. Gerard and L. Di Persia and G. Stegmayer and D.H. Milone},
  journal = {arXiv preprint arXiv:2410.16212},
  year    = {2025}
}