Defense Against Model Stealing Based on Account-Aware Distribution Discrepancy

Malicious users attempt to functionally replicate commercial models at low cost by training a clone model on query responses. Preventing such model-stealing attacks in a timely manner, while achieving strong protection and maintaining utility, is challenging. In this paper, we propose a novel non-parametric detector called Account-aware Distribution Discrepancy (ADD) that recognizes queries from malicious users by leveraging account-wise local dependency. We model each class as a multivariate normal distribution (MVN) in the feature space and measure an account's malicious score as the weighted sum of class-wise distribution discrepancies. The ADD detector is combined with random-based prediction poisoning to yield a plug-and-play defense module, named D-ADD, for image classification models. Results of extensive experiments show that D-ADD achieves strong defense against different types of attacks with little interference in serving benign users, in both soft-label and hard-label settings.
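To make the detection idea concrete, below is a minimal sketch of how a per-account distribution-discrepancy score along these lines could be computed. It fits one MVN per class on the defender's own features and scores an account by a weighted sum of class-wise discrepancies over its recent queries. The class names, the covariance shrinkage, and the use of negative log-likelihood as the discrepancy measure are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import multivariate_normal

class ADDDetectorSketch:
    """Illustrative account-aware distribution-discrepancy detector.

    Each class c is modeled as an MVN N(mu_c, Sigma_c) fit on the
    defender's feature space (e.g., penultimate-layer features).
    An account is scored by a weighted sum of class-wise
    discrepancies between its queries and the class distributions.
    """

    def __init__(self, class_features):
        # class_features: dict mapping class id -> (n_c, d) array of
        # features extracted from the defender's training data.
        self.mvns = {}
        for c, feats in class_features.items():
            mu = feats.mean(axis=0)
            # Small diagonal shrinkage keeps the covariance invertible.
            cov = np.cov(feats, rowvar=False) + 1e-3 * np.eye(feats.shape[1])
            self.mvns[c] = multivariate_normal(mean=mu, cov=cov)

    def malicious_score(self, account_feats, account_preds):
        # account_feats: (m, d) features of one account's recent queries
        # account_preds: (m,) predicted class ids for those queries
        score, m = 0.0, len(account_preds)
        for c, mvn in self.mvns.items():
            mask = account_preds == c
            if not mask.any():
                continue
            w = mask.sum() / m  # weight: fraction of queries hitting class c
            # Discrepancy proxy: mean negative log-likelihood of the
            # account's class-c queries under the class-c MVN.
            score += w * (-mvn.logpdf(account_feats[mask]).mean())
        return score
```

A high score indicates that an account's queries deviate from the feature distributions of the classes they are assigned to, which is the behavioral signature one would expect from surrogate or out-of-distribution query sets used in model stealing.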
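The serving side, pairing the detector with random-based prediction poisoning, might look roughly as follows. The threshold value and the specific poisoning choices (a random probability vector in the soft-label setting, a uniformly random label in the hard-label setting) are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def d_add_respond(logits, score, threshold=50.0, soft_label=True):
    """Return a (possibly poisoned) response for one query.

    logits:    (num_classes,) raw model outputs for the query.
    score:     the querying account's current ADD malicious score.
    threshold: hypothetical detection threshold.
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if score <= threshold:
        # Benign account: answer truthfully.
        return probs if soft_label else int(probs.argmax())
    if soft_label:
        # Suspicious account, soft-label API: random probability vector.
        return rng.dirichlet(np.ones_like(probs))
    # Suspicious account, hard-label API: uniformly random label.
    return int(rng.integers(len(probs)))
```

Because the poisoning is triggered per account rather than per query, benign users whose scores stay below the threshold continue to receive unaltered predictions, which is how a defense of this kind can preserve utility.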
@article{mei2025_2503.12497,
  title={Defense Against Model Stealing Based on Account-Aware Distribution Discrepancy},
  author={Jian-Ping Mei and Weibin Zhang and Jie Chen and Xuyun Zhang and Tiantian Zhu},
  journal={arXiv preprint arXiv:2503.12497},
  year={2025}
}