COSINT-Agent: A Knowledge-Driven Multimodal Agent for Chinese Open Source Intelligence

5 March 2025
Wentao Li, Congcong Wang, Xiaoxiao Cui, Zhi Liu, Wei Guo, Lizhen Cui
Abstract

Open Source Intelligence (OSINT) requires the integration and reasoning of diverse multimodal data, presenting significant challenges in deriving actionable insights. Traditional approaches, including multimodal large language models (MLLMs), often struggle to infer complex contextual relationships or deliver comprehensive intelligence from unstructured data sources. In this paper, we introduce COSINT-Agent, a knowledge-driven multimodal agent tailored to address the challenges of OSINT in the Chinese domain. COSINT-Agent seamlessly integrates the perceptual capabilities of fine-tuned MLLMs with the structured reasoning power of the Entity-Event-Scene Knowledge Graph (EES-KG). Central to COSINT-Agent is the innovative EES-Match framework, which bridges COSINT-MLLM and EES-KG, enabling systematic extraction, reasoning, and contextualization of multimodal insights. This integration facilitates precise entity recognition, event interpretation, and context retrieval, effectively transforming raw multimodal data into actionable intelligence. Extensive experiments validate the superior performance of COSINT-Agent across core OSINT tasks, including entity recognition, EES generation, and context matching. These results underscore its potential as a robust and scalable solution for advancing automated multimodal reasoning and enhancing the effectiveness of OSINT methodologies.

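To make the pipeline described above more concrete, the sketch below illustrates the general idea of matching MLLM-extracted Entity-Event-Scene (EES) triples against a knowledge graph to retrieve context. All class names, fields, and the matching logic are hypothetical, since the abstract does not specify the EES-KG schema or the EES-Match implementation; this is a minimal illustration of the concept, not the authors' method.

```python
# Hypothetical sketch of an EES-Match-style lookup: a structured
# Entity-Event-Scene (EES) triple, assumed to be produced by the
# fine-tuned MLLM, is matched against a toy in-memory knowledge graph
# to retrieve stored context. Names and schema are illustrative only.
from dataclasses import dataclass, field


@dataclass
class EESTriple:
    """Structured output assumed to come from the MLLM extraction step."""
    entities: list[str]
    event: str
    scene: str


@dataclass
class EESKnowledgeGraph:
    """Toy EES knowledge graph keyed by (entity, event, scene)."""
    facts: dict[tuple[str, str, str], list[str]] = field(default_factory=dict)

    def add_context(self, entity: str, event: str, scene: str, context: str) -> None:
        # Attach a contextual fact to a specific entity-event-scene key.
        self.facts.setdefault((entity, event, scene), []).append(context)

    def match(self, triple: EESTriple) -> list[str]:
        # Exact-key lookup for each extracted entity; a real system would
        # likely use fuzzy or embedding-based matching instead.
        results: list[str] = []
        for entity in triple.entities:
            results.extend(self.facts.get((entity, triple.event, triple.scene), []))
        return results


if __name__ == "__main__":
    kg = EESKnowledgeGraph()
    kg.add_context("Port of Qingdao", "cargo ship arrival", "harbor",
                   "Routine container traffic recorded at this terminal.")

    # Pretend the MLLM produced this triple from an image-plus-caption post.
    triple = EESTriple(entities=["Port of Qingdao"],
                       event="cargo ship arrival",
                       scene="harbor")
    for context in kg.match(triple):
        print(context)
```

In a full OSINT workflow, the retrieved context would then be fed back to the language model as grounding for event interpretation and report generation, which is the role the abstract attributes to the EES-Match bridge between COSINT-MLLM and EES-KG.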
View on arXiv
@article{li2025_2503.03215,
  title={COSINT-Agent: A Knowledge-Driven Multimodal Agent for Chinese Open Source Intelligence},
  author={Wentao Li and Congcong Wang and Xiaoxiao Cui and Zhi Liu and Wei Guo and Lizhen Cui},
  journal={arXiv preprint arXiv:2503.03215},
  year={2025}
}