MOOSE-Chem2: Exploring LLM Limits in Fine-Grained Scientific Hypothesis Discovery via Hierarchical Search

25 May 2025
Zonglin Yang, Wanhao Liu, Ben Gao, Y. Liu, Wei-Hong Li, Tong Xie, Lidong Bing, Xuming He, Erik Cambria, Dongzhan Zhou
arXiv (abs) · PDF · HTML · HuggingFace (25 upvotes) · GitHub (3★)

Papers citing "MOOSE-Chem2: Exploring LLM Limits in Fine-Grained Scientific Hypothesis Discovery via Hierarchical Search"

10 / 10 papers shown
ResearchGPT: Benchmarking and Training LLMs for End-to-End Computer Science Research Workflows
Penghao Wang, Yuhao Zhou, Mengxuan Wu, Ziheng Qin, Bangyuan Zhu, ..., J. Yang, Zheng Zhu, Tianlong Chen, Zinan Lin, Kai Wang
LLMAG · AI4TS · ALM · VLM
331 · 0 · 0 · 23 Oct 2025
ResearchBench: Benchmarking LLMs in Scientific Discovery via Inspiration-Based Task Decomposition
Yong Liu, Zonglin Yang, Tong Xie, Jinjie Ni, Ben Gao, Rui Wang, Weizhen He, Wanli Ouyang, Xiaoshi Zhong, Dongzhan Zhou
393 · 24 · 0 · 27 Mar 2025
LLM4SR: A Survey on Large Language Models for Scientific Research
Ziming Luo, Zonglin Yang, Zexin Xu, Wei Yang, Xinya Du
196 · 54 · 0 · 08 Jan 2025
Nova: An Iterative Planning and Search Approach to Enhance Novelty and Diversity of LLM Generated Ideas
Xiang Hu, Hongyu Fu, Jinge Wang, Yifeng Wang, Zhikun Li, Renjun Xu, Yu Lu, Yaochu Jin, Lili Pan, Zhenzhong Lan
LRM
221 · 40 · 0 · 18 Oct 2024
MOOSE-Chem: Large Language Models for Rediscovering Unseen Chemistry Scientific Hypotheses
International Conference on Learning Representations (ICLR), 2024
Zonglin Yang, Wanhao Liu, Ben Gao, Tong Xie, You Li, Wanli Ouyang, Soujanya Poria, Xiaoshi Zhong, Dongzhan Zhou
LRM
576 · 45 · 0 · 09 Oct 2024
Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction
Neural Information Processing Systems (NeurIPS), 2024
Keyu Tian, Yi Jiang, Zehuan Yuan, Liwei Wang
VGen
411 · 743 · 0 · 03 Apr 2024
Large Language Models are Zero Shot Hypothesis Proposers
Biqing Qi, Kaiyan Zhang, Haoxiang Li, Kai Tian, Sihang Zeng, Zhang-Ren Chen, Bowen Zhou
269 · 49 · 0 · 10 Nov 2023
Large Language Models for Automated Open-domain Scientific Hypotheses Discovery
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Zonglin Yang, Xinya Du, Junxian Li, Jie Zheng, Soujanya Poria, Xiaoshi Zhong
LRM
303 · 87 · 0 · 06 Sep 2023
SciMON: Scientific Inspiration Machines Optimized for Novelty
Annual Meeting of the Association for Computational Linguistics (ACL), 2023
Qingyun Wang, Doug Downey, Heng Ji, Kyle Lo
LLMAG
346 · 136 · 0 · 23 May 2023
Self-Consistency Improves Chain of Thought Reasoning in Language Models
International Conference on Learning Representations (ICLR), 2022
Xuezhi Wang, Jason W. Wei, Dale Schuurmans, Quoc Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, Denny Zhou
ReLM · BDL · LRM · AI4CE
2.7K · 5,693 · 0 · 21 Mar 2022