UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model

5 August 2024
Zhaowei Li, Wei Wang, Yiqing Cai, Xu Qi, Pengyu Wang, Dong Zhang, Hang Song, Botian Jiang, Zhida Huang, Tao Wang
Tags: AIFin, LRM

Papers citing "UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model"

4 / 4 papers shown
M2-omni: Advancing Omni-MLLM for Comprehensive Modality Support with Competitive Performance
Qingpei Guo, Kaiyou Song, Zipeng Feng, Ziping Ma, Qinglong Zhang, ..., Yunxiao Sun, Tai-Wei Chang, Jingdong Chen, Ming Yang, Jun Zhou
Tags: MLLM, VLM
26 Feb 2025
FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation
Shuai Yang, Yifan Zhou, Ziwei Liu, Chen Change Loy
Tags: VGen, DiffM
19 Mar 2024
CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, Jie Tang
Tags: DiffM
29 May 2022
LAVT: Language-Aware Vision Transformer for Referring Image Segmentation
Zhao Yang, Jiaqi Wang, Yansong Tang, Kai-xiang Chen, Hengshuang Zhao, Philip H. S. Torr
04 Dec 2021