OmniScience: A Domain-Specialized LLM for Scientific Reasoning and Discovery

22 March 2025
Vignesh Prabhakar
Md Amirul Islam
Adam Atanas
Yao-Ting Wang
Joah Han
Aastha Jhunjhunwala
Rucha Apte
Robert Clark
Kang Xu
Zihan Wang
Kai Liu
    LRM
Abstract

Large Language Models (LLMs) have demonstrated remarkable potential in advancing scientific knowledge and addressing complex challenges. In this work, we introduce OmniScience, a specialized large reasoning model for general science, developed through three key components: (1) domain adaptive pretraining on a carefully curated corpus of scientific literature, (2) instruction tuning on a specialized dataset to guide the model in following instructions for domain-specific tasks, and (3) reasoning-based knowledge distillation through fine-tuning to significantly enhance its ability to generate contextually relevant and logically sound responses. We demonstrate the versatility of OmniScience by developing a battery agent that efficiently ranks molecules as potential electrolyte solvents or additives. Comprehensive evaluations reveal that OmniScience is competitive with state-of-the-art large reasoning models on the GPQA Diamond and domain-specific battery benchmarks, while outperforming all public reasoning and non-reasoning models with similar parameter counts. Ablation experiments further show that domain adaptive pretraining and reasoning-based knowledge distillation are critical to attaining these performance levels across benchmarks.
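The abstract describes a three-stage recipe: domain adaptive pretraining on scientific text, instruction tuning, and reasoning-based knowledge distillation via fine-tuning on teacher reasoning traces. The sketch below illustrates how such a pipeline could be chained with the Hugging Face transformers and datasets libraries; the base model name, dataset file names, and hyperparameters are illustrative assumptions, not details taken from the paper.

# Minimal sketch of a three-stage fine-tuning pipeline (all names and
# hyperparameters are illustrative assumptions, not the paper's setup):
# (1) domain adaptive pretraining, (2) instruction tuning,
# (3) reasoning-based knowledge distillation on teacher-generated traces.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "meta-llama/Llama-3.1-8B"  # hypothetical base; the paper's base model may differ

def train_stage(model_name, data_files, output_dir, lr):
    """Run one causal-LM fine-tuning stage; stages differ only in data and hyperparameters."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Each JSONL file is assumed to have a "text" field with the training documents.
    dataset = load_dataset("json", data_files=data_files, split="train")
    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=2048)
    tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir=output_dir, learning_rate=lr,
                               per_device_train_batch_size=1, num_train_epochs=1,
                               bf16=True, report_to="none"),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    trainer.save_model(output_dir)
    tokenizer.save_pretrained(output_dir)
    return output_dir

# Stage 1: domain adaptive pretraining on raw scientific literature.
stage1 = train_stage(BASE_MODEL, "scientific_corpus.jsonl", "omniscience-dapt", lr=1e-5)
# Stage 2: instruction tuning on domain-specific instruction/response pairs.
stage2 = train_stage(stage1, "science_instructions.jsonl", "omniscience-sft", lr=2e-5)
# Stage 3: reasoning distillation -- fine-tune on reasoning traces from a stronger teacher model.
train_stage(stage2, "teacher_reasoning_traces.jsonl", "omniscience-reason", lr=1e-5)

Each stage simply continues causal-language-model training from the previous stage's checkpoint on a different corpus, which is one common way such multi-stage adaptation pipelines are wired together.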

View on arXiv: https://arxiv.org/abs/2503.17604
@article{prabhakar2025_2503.17604,
  title={OmniScience: A Domain-Specialized LLM for Scientific Reasoning and Discovery},
  author={Vignesh Prabhakar and Md Amirul Islam and Adam Atanas and Yao-Ting Wang and Joah Han and Aastha Jhunjhunwala and Rucha Apte and Robert Clark and Kang Xu and Zihan Wang and Kai Liu},
  journal={arXiv preprint arXiv:2503.17604},
  year={2025}
}