ProteinGPT: Multimodal LLM for Protein Property Prediction and Structure Understanding

Understanding biological processes, drug development, and biotechnological advancement requires detailed analysis of protein structures and functions, a task that is inherently complex and time-consuming in traditional protein research. To streamline this process, we introduce ProteinGPT, a state-of-the-art multimodal large language model for proteins that lets users upload protein sequences and/or structures for comprehensive analysis and interactive question answering. ProteinGPT integrates protein sequence and structure encoders with linear projection layers for precise representation adaptation, and leverages a large language model (LLM) to generate accurate, contextually relevant responses. To train ProteinGPT, we constructed a large-scale dataset of 132,092 proteins, each annotated with 20-30 property tags and 5-10 QA pairs, and optimized the instruction-tuning process using GPT-4o. Experiments demonstrate that ProteinGPT generates informative responses to protein-related questions, achieving high performance on both semantic and lexical metrics and significantly outperforming baseline models and general-purpose LLMs in understanding and responding to protein-related queries. Our code and data are available at this https URL.
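As a rough illustration of the modality-alignment step described above, the sketch below shows how linear projection layers can map frozen encoder outputs into an LLM's embedding space so protein tokens can be prepended to the text prompt. All dimensions and names here are hypothetical assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical dimensions (assumed, not from the paper): encoder output
# size and LLM hidden size.
ENC_DIM, LLM_DIM = 512, 4096

rng = np.random.default_rng(0)

# A single linear projection layer: maps each encoder token embedding
# into the LLM's embedding space.
W = rng.standard_normal((ENC_DIM, LLM_DIM)) * 0.02
b = np.zeros(LLM_DIM)

def project(protein_embeddings: np.ndarray) -> np.ndarray:
    """Adapt (seq_len, ENC_DIM) encoder outputs to (seq_len, LLM_DIM)."""
    return protein_embeddings @ W + b

# Stand-ins for the sequence and structure encoder outputs.
seq_tokens = rng.standard_normal((128, ENC_DIM))
struct_tokens = rng.standard_normal((64, ENC_DIM))

# Concatenate both projected modalities; this soft prompt would be fed
# to the LLM alongside the tokenized user question.
soft_prompt = np.concatenate([project(seq_tokens), project(struct_tokens)])
print(soft_prompt.shape)  # (192, 4096)
```

During instruction tuning, only lightweight adapters like these projections typically need training, which is one reason this design keeps representation adaptation cheap relative to the frozen encoders and LLM.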
@article{xiao2025_2408.11363,
  title={ProteinGPT: Multimodal LLM for Protein Property Prediction and Structure Understanding},
  author={Yijia Xiao and Edward Sun and Yiqiao Jin and Qifan Wang and Wei Wang},
  journal={arXiv preprint arXiv:2408.11363},
  year={2025}
}