
Contrastive Perplexity for Controlled Generation: An Application in Detoxifying Large Language Models

Annual Meeting of the Association for Computational Linguistics (ACL), 2024
Main: 9 pages · Appendix: 2 pages · Bibliography: 5 pages · 6 figures · 9 tables
Abstract

The generation of toxic content by large language models (LLMs) remains a critical challenge for the safe deployment of language technology. We propose a novel framework for implicit knowledge editing and controlled text generation by fine-tuning LLMs with a prototype-based contrastive perplexity objective. Central to our method is the construction of hard negatives: toxic outputs generated through adversarial paraphrasing so that they are semantically similar, and close in model probability, to their non-toxic counterparts. By training on these challenging, realistic pairs, our approach ensures robust and stable contrastive optimization. Experimental results in the domain of detoxification demonstrate that our method significantly reduces toxic generation while maintaining strong performance on downstream tasks such as commonsense reasoning and reading comprehension. Our findings highlight the effectiveness of exploiting hard negatives for attribute-aware fine-tuning.
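
As a rough illustration, a contrastive perplexity objective of this kind can be framed as an InfoNCE-style loss over sequence likelihoods, where the non-toxic sequence is the positive and its toxic hard-negative paraphrases are the negatives. The sketch below is an assumption, not the paper's exact objective; it presumes a Hugging Face-style causal LM whose forward pass returns the mean token negative log-likelihood as .loss, and the names contrastive_perplexity_loss and tau are hypothetical.

    import torch
    import torch.nn.functional as F

    def contrastive_perplexity_loss(model, pos_ids, neg_ids_list, tau=1.0):
        # Hypothetical InfoNCE-style loss over sequence log-likelihoods:
        # the non-toxic sequence (pos_ids) competes against K toxic
        # hard-negative paraphrases (neg_ids_list), so minimizing the
        # loss lowers the positive's perplexity relative to the negatives.
        def seq_logprob(ids):
            # Mean token log-likelihood under teacher forcing; HF-style
            # causal LMs return the mean NLL as .loss when labels are given.
            return -model(ids, labels=ids).loss

        scores = torch.stack(
            [seq_logprob(pos_ids)] + [seq_logprob(n) for n in neg_ids_list]
        ) / tau
        # The positive sits at index 0 of the (1 + K) candidates.
        target = torch.zeros(1, dtype=torch.long, device=scores.device)
        return F.cross_entropy(scores.unsqueeze(0), target)

Minimizing such a loss pushes probability mass toward the non-toxic sequence relative to its adversarial paraphrases, which matches the intuition described in the abstract.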
