ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

arXiv:2310.04816 (v3, latest)

Hacking Generative Models with Differentiable Network Bending

7 October 2023
Giacomo Aldegheri
Alina Rogalska
Ahmed Youssef
Eugenia Iofinova
arXiv (abs) | PDF | HTML
Abstract

In this work, we propose a method to 'hack' generative models, pushing their outputs away from the original training distribution and towards a new objective. We inject a small-scale trainable module between the intermediate layers of the model and train it for a small number of iterations, keeping the rest of the network frozen. The resulting output images display an uncanny quality, arising from the tension between the original and new objectives, which can be exploited for artistic purposes.
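The core idea above can be sketched numerically. The following toy NumPy example is an illustration under stated assumptions, not the paper's implementation: a tiny frozen two-layer network stands in for the generative model, a single trainable linear map (the "bending" module) is injected between its layers, and only that map is trained for a few iterations toward a hypothetical new objective (here, matching a fixed target vector).

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "generative model": two fixed linear layers with tanh nonlinearities.
# These weights stand in for a pretrained model and are never updated.
W1 = rng.normal(size=(4, 4))
W2 = rng.normal(size=(4, 4))

def forward(z, bend=None):
    h = np.tanh(z @ W1)          # intermediate activation
    if bend is not None:
        h = h @ bend             # injected trainable module
    return np.tanh(h @ W2)       # frozen output layer

# Hypothetical new objective: push outputs toward a fixed target vector.
target = np.full(4, 0.5)
z = rng.normal(size=(16, 4))     # batch of latent codes

# Trainable module: a single linear map, initialized to the identity,
# so the bent model initially reproduces the frozen model exactly.
B = np.eye(4)
lr = 0.05

for _ in range(200):             # a small number of iterations
    h = np.tanh(z @ W1)
    y = np.tanh((h @ B) @ W2)
    err = y - target
    # Backpropagate through the frozen output layer into B only.
    du = (err * (1.0 - y**2)) @ W2.T
    B -= lr * (h.T @ du) / len(z)

loss_before = np.mean((forward(z) - target) ** 2)
loss_after = np.mean((forward(z, bend=B) - target) ** 2)
```

Because only `B` receives gradient updates while `W1` and `W2` stay frozen, the trained module pulls the output toward the new target objective while the frozen layers keep exerting their original bias, which is the tension the abstract describes.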
