MentalMAC: Enhancing Large Language Models for Detecting Mental Manipulation via Multi-Task Anti-Curriculum Distillation
arXiv:2505.15255 · 21 May 2025
Yuansheng Gao, Han Bao, Tong Zhang, Bin Li, Zonghui Wang, Wenzhi Chen

Papers citing "MentalMAC: Enhancing Large Language Models for Detecting Mental Manipulation via Multi-Task Anti-Curriculum Distillation"

11 / 11 papers shown
Task-Informed Anti-Curriculum by Masking Improves Downstream Performance on Text
Andrei Jarca, Florinel-Alin Croitoru, Radu Tudor Ionescu
18 Feb 2025
Detecting Conversational Mental Manipulation with Intent-Aware Prompting
Jiayuan Ma, Hongbin Na, Zehua Wang, Yining Hua, Yue Liu, Wei Wang, Ling-Hao Chen
11 Dec 2024
Reversal of Thought: Enhancing Large Language Models with Preference-Guided Reverse Reasoning Warm-up
Jiahao Yuan, Dehui Du, Hao Zhang, Zixiang Di, Usman Naseem
LRM · 16 Oct 2024
Enhanced Detection of Conversational Mental Manipulation Through Advanced Prompting Techniques
Ivory Yang, Xiaobo Guo, Sean Xie, Soroush Vosoughi
14 Aug 2024
MentalManip: A Dataset For Fine-grained Analysis of Mental Manipulation in Conversations
Yuxin Wang, Ivory Yang, Saeed Hassanpour, Soroush Vosoughi
AAML · 26 May 2024
Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes
Lokesh Nagalapatti, Chun-Liang Li, Chih-Kuan Yeh, Hootan Nakhost, Yasuhisa Fujii, Alexander Ratner, Ranjay Krishna, Chen-Yu Lee, Tomas Pfister
ALM · 03 May 2023
CHAMPAGNE: Learning Real-world Conversation from Large-Scale Web Videos
Seungju Han, Jack Hessel, Nouha Dziri, Yejin Choi, Youngjae Yu
VGen · 17 Mar 2023
GPT-4 Technical Report
OpenAI: Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, ..., Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, Barret Zoph
LLMAG · MLLM · 15 Mar 2023
Toxicity Detection with Generative Prompt-based Inference
Yau-Shian Wang, Y. Chang
24 May 2022
When Do Curricula Work?
Xiaoxia Wu, Ethan Dyer, Behnam Neyshabur
05 Dec 2020
Distilling the Knowledge in a Neural Network
Geoffrey E. Hinton, Oriol Vinyals, J. Dean
FedML · 09 Mar 2015