
Weakly Supervised Vision-and-Language Pre-training with Relative Representations
arXiv:2305.15483, 24 May 2023
Chi Chen, Peng Li, Maosong Sun, Yang Liu

Papers citing "Weakly Supervised Vision-and-Language Pre-training with Relative Representations"

4 papers shown

1. From Bricks to Bridges: Product of Invariances to Enhance Latent Space Communication
   Irene Cannistraci, Luca Moschella, Marco Fumero, Valentino Maiorca, Emanuele Rodolà
   02 Oct 2023

2. ASIF: Coupled Data Turns Unimodal Models to Multimodal Without Training
   Antonio Norelli, Marco Fumero, Valentino Maiorca, Luca Moschella, Emanuele Rodolà, Francesco Locatello
   04 Oct 2022 (tags: VLM)

3. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
   Junnan Li, Dongxu Li, Caiming Xiong, S. Hoi
   28 Jan 2022 (tags: MLLM, BDL, VLM, CLIP)

4. Masked Autoencoders Are Scalable Vision Learners
   Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross B. Girshick
   11 Nov 2021 (tags: ViT, TPM)