
AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities
12 November 2022
Zhongzhi Chen, Guangyi Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Yu Wu
VLM
arXiv:2211.06679 · PDF · HTML

Papers citing "AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities"

11 of 61 citing papers shown.

Mitigating Inappropriateness in Image Generation: Can there be Value in Reflecting the World's Ugliness?
Manuel Brack, Felix Friedrich, P. Schramowski, Kristian Kersting · EGVM · 28 May 2023

MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation
Marco Bellagente, Manuel Brack, H. Teufel, Felix Friedrich, Bjorn Deiseroth, ..., Koen Oostermeijer, Andres Felipe Cruz Salinas, P. Schramowski, Kristian Kersting, Samuel Weinbach · 24 May 2023

Efficient Cross-Lingual Transfer for Chinese Stable Diffusion with Images as Pivots
Jinyi Hu, Xu Han, Xiaoyuan Yi, Yutong Chen, Wenhao Li, Zhiyuan Liu, Maosong Sun · DiffM · 19 May 2023

Vision-Language Models for Vision Tasks: A Survey
Jingyi Zhang, Jiaxing Huang, Sheng Jin, Shijian Lu · VLM · 03 Apr 2023

GlueGen: Plug and Play Multi-modal Encoders for X-to-image Generation
Can Qin, Ning Yu, Chen Xing, Shu Zhen Zhang, Zeyuan Chen, Stefano Ermon, Yun Fu, Caiming Xiong, Ran Xu · DiffM · 17 Mar 2023

Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness
Felix Friedrich, Manuel Brack, Lukas Struppek, Dominik Hintersdorf, P. Schramowski, Sasha Luccioni, Kristian Kersting · 07 Feb 2023

On the Power of Foundation Models
Yang Yuan · 29 Nov 2022

MURAL: Multimodal, Multitask Retrieval Across Languages
Aashi Jain, Mandy Guo, Krishna Srinivasan, Ting-Li Chen, Sneha Kudugunta, Chao Jia, Yinfei Yang, Jason Baldridge · VLM · 10 Sep 2021

Zero-Shot Text-to-Image Generation
Aditya A. Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, Ilya Sutskever · VLM · 24 Feb 2021

Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts
Soravit Changpinyo, P. Sharma, Nan Ding, Radu Soricut · VLM · 17 Feb 2021

Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu H. Pham, Quoc V. Le, Yun-hsuan Sung, Zhen Li, Tom Duerig · VLM, CLIP · 11 Feb 2021