
Understanding the Dark Side of LLMs' Intrinsic Self-Correction

19 December 2024
Qingjie Zhang, Han Qiu, Di Wang, Haoting Qian, Yiming Li, Tianwei Zhang, Minlie Huang
LRM
arXiv: 2412.14959 (PDF, HTML)

Papers citing "Understanding the Dark Side of LLMs' Intrinsic Self-Correction"

2 / 2 papers shown
When Do LLMs Admit Their Mistakes? Understanding the Role of Model Belief in Retraction
Yuqing Yang, Robin Jia
KELM, LRM
22 May 2025
Smaller Large Language Models Can Do Moral Self-Correction
Guangliang Liu, Zhiyu Xue, Rongrong Wang, Kristen Marie Johnson
LRM
30 October 2024