Self-Cognition in Large Language Models: An Exploratory Study

1 July 2024
Dongping Chen, Jiawen Shi, Yao Wan, Pan Zhou, Neil Zhenqiang Gong, Lichao Sun
LRM, LLMAG

Papers citing "Self-Cognition in Large Language Models: An Exploratory Study"

4 / 4 papers shown

Jailbreaking Large Language Models Through Alignment Vulnerabilities in Out-of-Distribution Settings
Yue Huang, Jingyu Tang, Dongping Chen, Bingda Tang, Yao Wan, Lichao Sun, Philip S. Yu, Xiangliang Zhang
AAML
19 Jun 2024

Large Language Models are Superpositions of All Characters: Attaining Arbitrary Role-play via Self-Alignment
Keming Lu, Bowen Yu, Chang Zhou, Jingren Zhou
23 Jan 2024

Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs
Shashank Gupta, Vaishnavi Shrivastava, A. Deshpande, A. Kalyan, Peter Clark, Ashish Sabharwal, Tushar Khot
08 Nov 2023

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020