MV-CLAM: Multi-View Molecular Interpretation with Cross-Modal Projection via Language Model
Human expertise in chemistry and biomedicine relies on contextual molecular understanding, a capability that large language models (LLMs) can extend through fine-grained alignment between molecular structures and text. Recent advances in multimodal learning focus on cross-modal alignment, but existing molecule-text models rely on single-view representations and ignore the complementary information carried by different molecular views, limiting molecular understanding. Moreover, naïve multi-view alignment strategies face two challenges: (1) separate aligned spaces yield inconsistent mappings between molecule and text embeddings, and (2) existing loss objectives fail to preserve the complementary information needed for fine-grained alignment. This can limit the LLM's ability to fully understand molecular properties. To address these issues, we propose MV-CLAM, a novel framework that aligns multi-view molecular representations into a unified textual space using a multi-query transformer (MQ-Former). Our approach ensures cross-view consistency, while a token-level contrastive loss preserves diverse molecular features across textual queries. MV-CLAM enhances molecular reasoning, improving retrieval and captioning accuracy. The source code of MV-CLAM is available at this https URL.
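To make the token-level objective concrete, below is a minimal PyTorch sketch of a molecule-text contrastive loss computed over per-query tokens. The tensor shapes, the max-over-queries aggregation, and all function names here are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a token-level contrastive loss: each molecule is
# represented by Q projected query tokens (e.g., MQ-Former outputs), and the
# loss keeps the best-matching token per molecule-text pair so that different
# tokens remain free to capture different molecular views.
import torch
import torch.nn.functional as F

def token_level_contrastive_loss(query_tokens, text_emb, temperature=0.07):
    """query_tokens: (B, Q, D) per-molecule query outputs;
    text_emb: (B, D) pooled text embeddings;
    returns a symmetric InfoNCE loss over the batch."""
    q = F.normalize(query_tokens, dim=-1)          # (B, Q, D)
    t = F.normalize(text_emb, dim=-1)              # (B, D)
    # Similarity of every text to every query token of every molecule.
    sim = torch.einsum("bqd,cd->bcq", q, t)        # (B, B, Q)
    # Token-level aggregation: best-matching query token per pair.
    sim, _ = sim.max(dim=-1)                       # (B, B)
    sim = sim / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    loss_m2t = F.cross_entropy(sim, labels)        # molecule -> text
    loss_t2m = F.cross_entropy(sim.t(), labels)    # text -> molecule
    return 0.5 * (loss_m2t + loss_t2m)

# Toy usage with random tensors (batch of 4, 8 query tokens, dim 256):
loss = token_level_contrastive_loss(torch.randn(4, 8, 256), torch.randn(4, 256))
```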
@article{ha2025_2503.04780,
  title   = {MV-CLAM: Multi-View Molecular Interpretation with Cross-Modal Projection via Language Model},
  author  = {Sumin Ha and Jun Hyeong Kim and Yinhua Piao and Sun Kim},
  journal = {arXiv preprint arXiv:2503.04780},
  year    = {2025}
}