

LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents

9 November 2023
Shilong Liu
Hao Cheng
Haotian Liu
Hao Zhang
Feng Li
Tianhe Ren
Xueyan Zou
Jianwei Yang
Hang Su
Jun Zhu
Lei Zhang
Jianfeng Gao
Chunyuan Li
Tags: MLLM, VLM
Abstract

LLaVA-Plus is a general-purpose multimodal assistant that expands the capabilities of large multimodal models. It maintains a skill repository of pre-trained vision and vision-language models and can activate relevant tools based on the user's input to fulfill real-world tasks. LLaVA-Plus is trained on multimodal instruction-following data to acquire the ability to use tools, covering visual understanding, generation, external knowledge retrieval, and compositions of these skills. Empirical results show that LLaVA-Plus outperforms LLaVA in existing capabilities and exhibits new ones. It is distinct in that the image query is directly grounded and actively engaged throughout the entire human-AI interaction session, significantly improving tool-use performance and enabling new scenarios.
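
The tool-use mechanism the abstract describes, a skill repository of callable models that the assistant dispatches to based on the user's request, can be sketched in a few lines of Python. The sketch below is purely illustrative: the registry name, the JSON action format, and the toy tools are assumptions for exposition, not the paper's actual interface.

import json
from typing import Callable, Dict

# Toy stand-ins for the pre-trained vision / vision-language tools
# in the skill repository (hypothetical; not the paper's models).
def detect_objects(image_path: str) -> str:
    return f"[bounding boxes detected in {image_path}]"

def retrieve_knowledge(query: str) -> str:
    return f"[external knowledge retrieved for '{query}']"

SKILL_REPOSITORY: Dict[str, Callable[..., str]] = {
    "detection": detect_objects,
    "retrieval": retrieve_knowledge,
}

def execute_action(model_output: str) -> str:
    """Parse the assistant's action (assumed to be JSON here) and invoke the named skill."""
    action = json.loads(model_output)
    tool = SKILL_REPOSITORY[action["tool"]]
    return tool(**action["args"])

if __name__ == "__main__":
    # The assistant chooses a tool from the user's request and the image;
    # the tool's output is fed back as context for the final reply.
    print(execute_action('{"tool": "detection", "args": {"image_path": "dog.jpg"}}'))

In LLaVA-Plus, the tool result is folded back into the dialogue so the model can compose skills and ground its answer in the image, which is why the image query stays actively engaged throughout the session.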

View on arXiv