DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models

24 May 2023
Sungnyun Kim, Junsoo Lee, Kibeom Hong, Daesik Kim, Namhyuk Ahn
DiffM

Papers citing "DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models"

18 / 18 papers shown

Nearly Zero-Cost Protection Against Mimicry by Personalized Diffusion Models
Namhyuk Ahn, Kiyoon Yoo, Wonhyuk Ahn, Daesik Kim, Seung-Hun Nam
AAML · WIGM · DiffM
82 · 0 · 0 · 16 Dec 2024

AnyControl: Create Your Artwork with Versatile Control on Text-to-Image Generation
Yanan Sun, Yanchen Liu, Yinhao Tang, Wenjie Pei, Kai Chen
DiffM
21 · 8 · 0 · 27 Jun 2024

ControlVAR: Exploring Controllable Visual Autoregressive Modeling
Xiang Li, Kai Qiu, Hao Chen, Jason Kuen, Zhe-nan Lin, Rita Singh, Bhiksha Raj
DiffM
40 · 21 · 0 · 14 Jun 2024

SketchDeco: Decorating B&W Sketches with Colour
Chaitat Utintu, Pinaki Nath Chowdhury, Aneeshan Sain, Subhadeep Koley, A. Bhunia, Yi-Zhe Song
DiffM
27 · 3 · 0 · 29 May 2024

LTOS: Layout-controllable Text-Object Synthesis via Adaptive Cross-attention Fusions
Xiaoran Zhao, Tianhao Wu, Yu Lai, Zhiliang Tian, Zhen Huang, Yahui Liu, Zejiang He, Dongsheng Li
DiffM
31 · 1 · 0 · 21 Apr 2024

Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model
Han Lin, Jaemin Cho, Abhaysinh Zala, Mohit Bansal
DiffM · VGen
61 · 20 · 0 · 15 Apr 2024

Imperceptible Protection against Style Imitation from Diffusion Models
Namhyuk Ahn, Wonhyuk Ahn, Kiyoon Yoo, Daesik Kim, Seung-Hun Nam
WIGM · AAML · DiffM
41 · 5 · 0 · 28 Mar 2024

Controllable Generation with Text-to-Image Diffusion Models: A Survey
Pu Cao, Feng Zhou, Qing-Huang Song, Lu Yang
67 · 35 · 0 · 07 Mar 2024

BootPIG: Bootstrapping Zero-shot Personalized Image Generation Capabilities in Pretrained Diffusion Models
Senthil Purushwalkam, Akash Gokul, Shafiq R. Joty, Nikhil Naik
DiffM
29 · 16 · 0 · 25 Jan 2024

FineControlNet: Fine-level Text Control for Image Generation with Spatially Aligned Text Control Injection
Hongsuk Choi, Isaac Kasahara, Selim Engin, Moritz Graule, Nikhil Chavan-Dafle, Volkan Isler
DiffM
16 · 3 · 0 · 14 Dec 2023

Muse: Text-To-Image Generation via Masked Generative Transformers
Huiwen Chang, Han Zhang, Jarred Barber, AJ Maschinot, José Lezama, ..., Kevin Patrick Murphy, William T. Freeman, Michael Rubinstein, Yuanzhen Li, Dilip Krishnan
DiffM
197 · 517 · 0 · 02 Jan 2023

UPainting: Unified Text-to-Image Diffusion Generation with Cross-modal Guidance
Wei Li, Xue Xu, Xinyan Xiao, Jiacheng Liu, Hu Yang, ..., Zhanpeng Wang, Zhifan Feng, Qiaoqiao She, Yajuan Lyu, Hua-Hong Wu
110 · 29 · 0 · 28 Oct 2022

Diffusion-based Image Translation using Disentangled Style and Content Representation
Gihyun Kwon, Jong Chul Ye
DiffM
147 · 154 · 0 · 30 Sep 2022

Pretraining is All You Need for Image-to-Image Translation
Tengfei Wang, Ting Zhang, Bo Zhang, Hao Ouyang, Dong Chen, Qifeng Chen, Fang Wen
DiffM
184 · 177 · 0 · 25 May 2022

RePaint: Inpainting using Denoising Diffusion Probabilistic Models
Andreas Lugmayr, Martin Danelljan, Andrés Romero, F. I. F. Richard Yu, Radu Timofte, Luc Van Gool
DiffM
211 · 1,353 · 0 · 24 Jan 2022

Palette: Image-to-Image Diffusion Models
Chitwan Saharia, William Chan, Huiwen Chang, Chris A. Lee, Jonathan Ho, Tim Salimans, David J. Fleet, Mohammad Norouzi
DiffM · VLM
325 · 1,584 · 0 · 10 Nov 2021

A Style-Based Generator Architecture for Generative Adversarial Networks
Tero Karras, S. Laine, Timo Aila
262 · 10,320 · 0 · 12 Dec 2018

Image Generation from Scene Graphs
Justin Johnson, Agrim Gupta, Li Fei-Fei
GNN
221 · 812 · 0 · 04 Apr 2018