ResearchTrend.AI
MELINDA: A Multimodal Dataset for Biomedical Experiment Method Classification

AAAI Conference on Artificial Intelligence (AAAI), 2020
16 December 2020
Te-Lin Wu, Shikhar Singh, S. Paul, Gully A. Burns, Nanyun Peng

Papers citing "MELINDA: A Multimodal Dataset for Biomedical Experiment Method Classification"

9 papers
Order-Preserving Dimension Reduction for Multimodal Semantic Embedding
Chengyu Gong, Gefei Shen, Luanzheng Guo, Nathan R. Tallent, Dongfang Zhao
15 Aug 2024
Medical Vision-Language Pre-Training for Brain Abnormalities
Masoud Monajatipoor, Zi-Yi Dou, Aichi Chien, Nanyun Peng, Kai-Wei Chang
27 Apr 2024
Unified Multi-modal Diagnostic Framework with Reconstruction Pre-training and Heterogeneity-combat Tuning
Yupei Zhang, Li Pan, Qiushi Yang, Tan Li, Zhen Chen
09 Apr 2024
UniDCP: Unifying Multiple Medical Vision-language Tasks via Dynamic Cross-modal Learnable Prompts
Chenlu Zhan, Yufei Zhang, Yu Lin, Gaoang Wang, Hongwei Wang
18 Dec 2023
Medical Vision Language Pretraining: A survey
Prashant Shrestha, Sanskar Amgain, Bidur Khanal, Cristian A. Linte, Binod Bhattarai
11 Dec 2023
MuG: A Multimodal Classification Benchmark on Game Data with Tabular, Textual, and Visual Fields
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023
Jiaying Lu, Yongchen Qian, Shifan Zhao, Yuanzhe Xi, Carl Yang
06 Feb 2023
Align, Reason and Learn: Enhancing Medical Vision-and-Language Pre-training with Knowledge
ACM Multimedia (ACM MM), 2022
Zhihong Chen, Guanbin Li, Xiang Wan
15 Sep 2022
Multi-Modal Masked Autoencoders for Medical Vision-and-Language Pre-Training
International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2022
Zhihong Chen, Yu Du, Jinpeng Hu, Yang Liu, Guanbin Li, Xiang Wan, Tsung-Hui Chang
15 Sep 2022
Automatic Related Work Generation: A Meta Study
Xiangci Li, Jessica Ouyang
06 Jan 2022