
Beyond Toxic Neurons: A Mechanistic Analysis of DPO for Toxicity Reduction

10 November 2024
Yushi Yang
Filip Sondej
Harry Mayne
Adam Mahdi
arXiv:2411.06424
Abstract

Safety fine-tuning algorithms are widely used to reduce harmful outputs in language models, but how they achieve this remains unclear. In the case of Direct Preference Optimization (DPO) for toxicity reduction, the prevailing explanation claims that DPO works by dampening the activations of toxic MLP neurons. However, through activation patching, we show that this explanation is incomplete. Projections onto a toxicity probe's direction show that only 4.9% of the toxicity reduction comes from dampened toxic neurons. Instead, DPO reduces toxicity through distributed activation shifts across a majority of neurons, progressively steering MLP layer outputs away from toxicity. These shifts accumulate across four neuron groups: two that reduce toxicity and two that promote anti-toxicity. Activation patching confirms the cumulative roles of these groups: patching all identified groups together effectively replicates DPO's effect. These findings illustrate DPO's mechanism: it reduces toxicity by accumulating many small activation shifts across neurons throughout the layers, and they provide new mechanistic insight into how safety fine-tuning reduces harmful outputs in language models.
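
The sketch below illustrates the kind of probe-projection accounting the abstract describes: attributing the shift along a learned toxicity direction to per-neuron activation changes between the base and DPO models, and comparing the contribution of a small "toxic neuron" set against the distributed long tail. It is not the authors' code; the array names, shapes, and random placeholder values are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's implementation) of projecting
# per-neuron MLP output changes onto a toxicity probe direction.

import numpy as np

rng = np.random.default_rng(0)

d_model, n_neurons = 64, 256            # hypothetical hidden size / MLP width

# Toxicity probe direction in the residual stream (assumed given, e.g. from a
# linear probe trained to detect toxic continuations).
probe_dir = rng.normal(size=d_model)
probe_dir /= np.linalg.norm(probe_dir)

# Per-neuron output directions (rows of the MLP down-projection matrix).
W_out = rng.normal(size=(n_neurons, d_model))

# Mean neuron activations on a prompt set, before and after DPO
# (random numbers standing in for real measurements).
act_base = rng.normal(size=n_neurons)
act_dpo = act_base + rng.normal(scale=0.05, size=n_neurons)

# Each neuron's contribution to movement along the toxicity direction:
# (change in activation) * (alignment of its output direction with the probe).
delta_act = act_dpo - act_base
contrib = delta_act * (W_out @ probe_dir)

total_shift = contrib.sum()             # net projection shift induced by DPO

# Fraction of the total shift explained by a small, hypothetical set of
# "toxic" neurons versus the distributed contributions of all the others.
toxic_neurons = np.argsort(W_out @ probe_dir)[-10:]
frac_from_toxic = contrib[toxic_neurons].sum() / total_shift
print(f"fraction of shift from top toxic neurons: {frac_from_toxic:.2%}")
```

Under this framing, the paper's claim corresponds to the toxic-neuron fraction being small (about 4.9%), with the remainder of the anti-toxic shift coming from many small contributions spread across the other neurons.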
