Text2FX: Harnessing CLAP Embeddings for Text-Guided Audio Effects

Abstract

This work introduces Text2FX, a method that leverages CLAP embeddings and differentiable digital signal processing to control audio effects, such as equalization and reverberation, using open-vocabulary natural language prompts (e.g., "make this sound in-your-face and bold"). Text2FX operates without retraining any models, relying instead on single-instance optimization within the existing embedding space, thus enabling a flexible, scalable approach to open-vocabulary sound transformations through interpretable and disentangled FX manipulation. We show that CLAP encodes valuable information for controlling audio effects and propose two optimization approaches that use CLAP to map text to audio effect parameters. While we demonstrate with CLAP, this approach is applicable to any shared text-audio embedding space. Likewise, while we demonstrate with equalization and reverberation, any differentiable audio effect may be controlled. We conduct a listener study with diverse text prompts and source audio to evaluate the quality and alignment of these methods with human perception. Demos and code are available at this http URL.
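The core idea — optimizing the parameters of a differentiable effect so that the processed audio's embedding moves toward a text embedding in a shared space — can be sketched in a few lines. The following is a minimal, self-contained illustration, not the paper's implementation: a random linear map stands in for CLAP's frozen audio encoder, a random unit vector stands in for the text embedding of a prompt, and a toy 3-band gain EQ stands in for the paper's richer DDSP effects. All names here (`embed_audio`, `eq3`) are hypothetical.

```python
import torch

torch.manual_seed(0)
N, DIM = 2048, 64

# Stand-in for a frozen audio encoder (CLAP in the paper):
# a fixed random linear map followed by L2 normalization.
proj = torch.randn(N, DIM)

def embed_audio(x):
    e = x @ proj
    return e / e.norm()

# Stand-in for the text embedding of a prompt such as "warm and bright";
# Text2FX would obtain this from CLAP's text encoder.
text_emb = torch.randn(DIM)
text_emb = text_emb / text_emb.norm()

# A toy differentiable 3-band EQ: per-band gains applied in the
# frequency domain (the paper uses richer differentiable EQ/reverb).
def eq3(x, gains_db):
    X = torch.fft.rfft(x)
    n = X.shape[0]
    lo, hi = n // 3, 2 * n // 3
    g = 10 ** (gains_db / 20)
    scale = torch.cat([g[0].repeat(lo), g[1].repeat(hi - lo), g[2].repeat(n - hi)])
    return torch.fft.irfft(X * scale, n=x.shape[0])

audio = torch.randn(N)
gains_db = torch.zeros(3, requires_grad=True)  # the FX parameters being optimized

# Single-instance optimization: no model is trained, only the three
# effect parameters are updated by gradient descent on cosine distance.
opt = torch.optim.Adam([gains_db], lr=0.05)
for _ in range(300):
    loss = 1 - torch.dot(embed_audio(eq3(audio, gains_db)), text_emb)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(gains_db.detach())  # interpretable EQ settings found by optimization
```

Because the optimized variables are effect parameters rather than raw audio, the result stays interpretable: the final `gains_db` values are ordinary EQ settings a user could inspect or tweak.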

@article{chu2025_2409.18847,
  title={Text2FX: Harnessing CLAP Embeddings for Text-Guided Audio Effects},
  author={Annie Chu and Patrick O'Reilly and Julia Barnett and Bryan Pardo},
  journal={arXiv preprint arXiv:2409.18847},
  year={2025}
}