Babel: A Scalable Pre-trained Model for Multi-Modal Sensing via Expandable Modality Alignment

25 July 2024
Shenghong Dai, Shiqi Jiang, Yifan Yang, Ting Cao, Mo Li, Suman Banerjee, Lili Qiu

Papers citing "Babel: A Scalable Pre-trained Model for Multi-Modal Sensing via Expandable Modality Alignment"

OneLLM: One Framework to Align All Modalities with Language
Jiaming Han, Kaixiong Gong, Yiyuan Zhang, Jiaqi Wang, Kaipeng Zhang, D. Lin, Yu Qiao, Peng Gao, Xiangyu Yue
MLLM
10 Jan 2025
APE: Aligning Pretrained Encoders to Quickly Learn Aligned Multimodal Representations
Elan Rosenfeld, Preetum Nakkiran, Hadi Pouransari, Oncel Tuzel, Fartash Faghri
08 Oct 2022
Ego4D: Around the World in 3,000 Hours of Egocentric Video
Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, ..., Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik
EgoV
13 Oct 2021