DMRL: Data- and Model-aware Reward Learning for Data Extraction

Abstract

Large language models (LLMs) are inherently vulnerable to unintended privacy breaches. Consequently, systematic red-teaming research is essential for developing robust defense mechanisms. However, current data extraction methods suffer from several limitations: (1) reliance on dataset duplicates (addressable via deduplication), (2) dependence on prompt engineering (now countered by detection and defense mechanisms), and (3) reliance on random-search adversarial generation. To address these challenges, we propose DMRL, a Data- and Model-aware Reward Learning approach for data extraction. This technique leverages inverse reinforcement learning to extract sensitive data from LLMs. Our method consists of two main components: (1) constructing an introspective reasoning dataset that captures leakage mindsets to guide model behavior, and (2) training reward models with Group Relative Policy Optimization (GRPO), dynamically tuning optimization based on task difficulty at both the data and model levels. Comprehensive experiments across various LLMs demonstrate that DMRL outperforms all baseline methods in data extraction performance.
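
The abstract does not give implementation details, but the GRPO component can be illustrated with a minimal sketch of the group-relative advantage step, extended with a hypothetical difficulty-based weight standing in for the paper's data- and model-aware tuning. The function names and the weighting formula below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of GRPO-style group-relative advantages with a hypothetical
# difficulty weight. Names (group_relative_advantages, difficulty_weight) and
# the weighting formula are illustrative assumptions, not the paper's method.
import torch


def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Normalize rewards within each group of sampled completions.

    rewards: (num_prompts, group_size) reward-model scores, one row per prompt.
    Returns advantages of the same shape: (reward - group mean) / group std.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)


def difficulty_weight(rewards: torch.Tensor) -> torch.Tensor:
    """Hypothetical data-/model-aware weight: prompts where the current policy
    rarely succeeds (low mean reward) or where outcomes vary widely (high
    spread) receive a larger update weight."""
    mean = rewards.mean(dim=1, keepdim=True)    # model-level: current success rate
    spread = rewards.std(dim=1, keepdim=True)   # data-level: how contested the prompt is
    return (1.0 - mean.clamp(0.0, 1.0)) * (1.0 + spread)


# Toy usage: 2 prompts, 4 sampled completions each, rewards in [0, 1].
rewards = torch.tensor([[0.90, 0.80, 0.70, 0.95],   # "easy" prompt
                        [0.10, 0.60, 0.05, 0.30]])  # "hard" prompt
advantages = group_relative_advantages(rewards) * difficulty_weight(rewards)
print(advantages)
```

In this sketch the hard prompt's advantages are amplified relative to the easy prompt's, which matches the abstract's idea of tuning the optimization strength by task difficulty; the actual weighting used by DMRL may differ.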

@article{wang2025_2505.06284,
  title={DMRL: Data- and Model-aware Reward Learning for Data Extraction},
  author={Zhiqiang Wang and Ruoxi Cheng},
  journal={arXiv preprint arXiv:2505.06284},
  year={2025}
}