ResearchTrend.AI
Finding Structural Knowledge in Multimodal-BERT

Annual Meeting of the Association for Computational Linguistics (ACL), 2022
17 March 2022
Victor Milewski, Miryam de Lhoneux, Marie-Francine Moens
Links: arXiv (abs) · PDF · HTML · GitHub (10★)

Papers citing "Finding Structural Knowledge in Multimodal-BERT"

7 / 7 papers shown
Towards Knowledge-Infused Automated Disease Diagnosis Assistant
Scientific Reports (Sci Rep), 2024 · 18 May 2024
Mohit Tomar, Abhisek Tiwari, Sriparna Saha
Tag: MedIm

Assessment of Pre-Trained Models Across Languages and Grammars
International Joint Conference on Natural Language Processing (IJCNLP), 2023 · 20 Sep 2023
Alberto Muñoz-Ortiz, David Vilares, Carlos Gómez-Rodríguez

Semantic Composition in Visually Grounded Language Models
15 May 2023
Rohan Pandey
Tag: CoGe

Measuring Progress in Fine-grained Vision-and-Language Understanding
Annual Meeting of the Association for Computational Linguistics (ACL), 2023 · 12 May 2023
Emanuele Bugliarello, Laurent Sartran, Aishwarya Agrawal, Lisa Anne Hendricks, Aida Nematzadeh
Tag: VLM

Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment
Annual Meeting of the Association for Computational Linguistics (ACL), 2022 · 20 Dec 2022
Rohan Pandey, Rulin Shao, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency

MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks
Annual Meeting of the Association for Computational Linguistics (ACL), 2022 · 15 Dec 2022
Letitia Parcalabescu, Anette Frank

CREPE: Can Vision-Language Foundation Models Reason Compositionally?
Computer Vision and Pattern Recognition (CVPR), 2022 · 13 Dec 2022
Zixian Ma, Jerry Hong, Mustafa Omer Gul, Mona Gandhi, Irena Gao, Ranjay Krishna
Tag: CoGe