Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models

2 July 2020
Yuge Shi, Brooks Paige, Philip H. S. Torr, N. Siddharth · VLM

Papers citing "Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models"

5 / 5 papers shown

Efficiency-oriented approaches for self-supervised speech representation learning
Luis Lugo, Valentin Vielzeuf · SSL · 18 Dec 2023

Generalized Product-of-Experts for Learning Multimodal Representations in Noisy Environments
Abhinav Joshi, Naman K. Gupta, Jinang Shah, Binod Bhattarai, Ashutosh Modi, Danail Stoyanov · OffRL · 07 Nov 2022

Mitigating Modality Collapse in Multimodal VAEs via Impartial Optimization
Adrián Javaloy, Maryam Meghdadi, Isabel Valera · 09 Jun 2022

Multimodal Adversarially Learned Inference with Factorized Discriminators
Wenxue Chen, Jianke Zhu · 20 Dec 2021

Learning Deep Representations of Fine-grained Visual Descriptions
Scott E. Reed, Zeynep Akata, Bernt Schiele, Honglak Lee · OCL, VLM · 17 May 2016