Improving the Reliability of LLMs: Combining CoT, RAG, Self-Consistency, and Self-Verification

13 May 2025
Adarsh Kumar, Hwiyoon Kim, Jawahar Sai Nathani, Neil Roy
HILM, LRM
ArXiv (abs) · PDF · HTML

Papers citing "Improving the Reliability of LLMs: Combining CoT, RAG, Self-Consistency, and Self-Verification"

2 papers

Scaling Open-Weight Large Language Models for Hydropower Regulatory Information Extraction: A Systematic Analysis
Hong-Jun Yoon, Faisal Ashraf, Thomas A. Ruggles, Debjani Singh
ELM · 14 Nov 2025
CoT-RAG: Integrating Chain of Thought and Retrieval-Augmented Generation to Enhance Reasoning in Large Language Models
Feiyang Li, Peng Fang, Zhan Shi, Arijit Khan, Fang Wang, Weihao Wang, Xin Zhang, Xin Zhang
ReLM, LRM · 18 Apr 2025