Continuous, Subject-Specific Attribute Control in T2I Models by Identifying Semantic Directions

25 March 2024
S. A. Baumann, Felix Krause, Michael Neumayr, Nick Stracke, Vincent Tao Hu, Björn Ommer
DiffM · LM&Ro

Papers citing "Continuous, Subject-Specific Attribute Control in T2I Models by Identifying Semantic Directions"

10 / 10 papers shown
"I Know It When I See It": Mood Spaces for Connecting and Expressing Visual Concepts
"I Know It When I See It": Mood Spaces for Connecting and Expressing Visual Concepts
Huzheng Yang
Katherine Xu
Michael D. Grossberg
Yutong Bai
Jianbo Shi
26
0
0
21 Apr 2025

Att-Adapter: A Robust and Precise Domain-Specific Multi-Attributes T2I Diffusion Adapter via Conditional Variational Autoencoder
Wonwoong Cho, Yan-Ying Chen, M. Klenk, David I. Inouye, Yanxia Zhang
DiffM
15 Mar 2025

Unpacking SDXL Turbo: Interpreting Text-to-Image Models with Sparse Autoencoders
Viacheslav Surkov, Chris Wendler, Mikhail Terekhov, Justin Deschenaux, Robert West, Çağlar Gülçehre
VLM
28 Oct 2024

SAFREE: Training-Free and Adaptive Guard for Safe Text-to-Image And Video Generation
Jaehong Yoon, Shoubin Yu, Vaidehi Patil, Huaxiu Yao, Mohit Bansal
16 Oct 2024

Magnet: We Never Know How Text-to-Image Diffusion Models Work, Until We Learn How Vision-Language Models Function
Chenyi Zhuang, Ying Hu, Pan Gao
DiffM · VLM
30 Sep 2024

Interpreting the Weight Space of Customized Diffusion Models
Amil Dravid, Yossi Gandelsman, Kuan-Chieh Jackson Wang, Rameen Abdal, Gordon Wetzstein, Alexei A. Efros, Kfir Aberman
13 Jun 2024

Get What You Want, Not What You Don't: Image Content Suppression for Text-to-Image Diffusion Models
Senmao Li, J. Weijer, Taihang Hu, Fahad Shahbaz Khan, Qibin Hou, Yaxing Wang, Jian Yang
DiffM
08 Feb 2024

Discovery and Expansion of New Domains within Diffusion Models
Ye Zhu, Yu Wu, Duo Xu, Zhiwei Deng, Yan Yan, Olga Russakovsky
DiffM
13 Oct 2023

Multi-Concept T2I-Zero: Tweaking Only The Text Embeddings and Nothing Else
Hazarapet Tunanyan, Dejia Xu, Shant Navasardyan, Zhangyang Wang, Humphrey Shi
DiffM
11 Oct 2023

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
12 Dec 2018