Object-Attribute-Relation Representation Based Video Semantic Communication

15 June 2024
Qiyuan Du
Yiping Duan
Qianqian Yang
Xiaoming Tao
Mérouane Debbah
Abstract

With the rapid growth of multimedia data volume, there is an increasing need for efficient video transmission in applications such as virtual reality and future video streaming services. Semantic communication is emerging as a vital technique for ensuring efficient and reliable transmission in low-bandwidth, high-noise settings. However, most current approaches focus on joint source-channel coding (JSCC) that depends on end-to-end training. These methods often lack an interpretable semantic representation and struggle with adaptability to various downstream tasks. In this paper, we introduce the use of object-attribute-relation (OAR) as a semantic framework for videos to facilitate low bit-rate coding and enhance the JSCC process for more effective video transmission. We utilize OAR sequences for both low bit-rate representation and generative video reconstruction. Additionally, we incorporate OAR into the image JSCC model to prioritize communication resources for areas more critical to downstream tasks. Our experiments on traffic surveillance video datasets assess the effectiveness of our approach in terms of video transmission performance. The empirical findings demonstrate that our OAR-based video coding method not only outperforms H.265 coding at lower bit-rates but also synergizes with JSCC to deliver robust and efficient video transmission.
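The abstract describes two ideas: representing video frames as object-attribute-relation (OAR) sequences, and using that representation to steer communication resources toward task-critical regions. The paper page includes no code, so below is a minimal Python sketch of what an OAR frame element and a toy task-aware bit allocation could look like. All names here (OARObject, OARFrame, allocate_bits) and the proportional weighting scheme are hypothetical illustrations, not the authors' implementation.

from dataclasses import dataclass, field

@dataclass
class OARObject:
    obj_id: int
    category: str                             # e.g. "car", "pedestrian"
    attributes: dict[str, str]                # e.g. {"color": "red", "state": "moving"}
    bbox: tuple[float, float, float, float]   # normalized (x, y, w, h)

@dataclass
class OARFrame:
    objects: list[OARObject]
    # Relations as (subject_id, predicate, object_id), e.g. (1, "in_front_of", 0).
    relations: list[tuple[int, str, int]] = field(default_factory=list)

def allocate_bits(frame: OARFrame, total_bits: int,
                  task_weights: dict[str, float]) -> dict[int, int]:
    """Toy prioritization (hypothetical): split the bit budget across objects
    in proportion to a task-relevance weight assigned to each category."""
    weights = {o.obj_id: task_weights.get(o.category, 1.0) for o in frame.objects}
    norm = sum(weights.values()) or 1.0
    return {oid: int(total_bits * w / norm) for oid, w in weights.items()}

# Example: a surveillance-style frame where pedestrians are weighted 3x for a
# downstream safety task, so they receive most of the (hypothetical) bit budget.
frame = OARFrame(
    objects=[
        OARObject(0, "car", {"color": "red", "state": "moving"}, (0.10, 0.20, 0.30, 0.20)),
        OARObject(1, "pedestrian", {"state": "crossing"}, (0.60, 0.50, 0.10, 0.30)),
    ],
    relations=[(1, "in_front_of", 0)],
)
print(allocate_bits(frame, total_bits=4096, task_weights={"pedestrian": 3.0, "car": 1.0}))
# -> {0: 1024, 1: 3072}

Proportional weighting is only the simplest stand-in for the paper's stated goal of prioritizing communication resources for areas critical to downstream tasks; the actual method integrates OAR into an image JSCC model rather than allocating bits explicitly.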

@article{du2025_2406.10469,
  title={Object-Attribute-Relation Representation Based Video Semantic Communication},
  author={Qiyuan Du and Yiping Duan and Qianqian Yang and Xiaoming Tao and Mérouane Debbah},
  journal={arXiv preprint arXiv:2406.10469},
  year={2025}
}