ResearchTrend.AI

All-in-One Transferring Image Compression from Human Perception to Multi-Machine Perception

17 April 2025
Jiancheng Zhao, Xiang Ji, Zhuoxiao Li, Zunian Wan, Weihang Ran, Mingze Ma, Muyao Niu, Yifan Zhan, Cheng-Ching Tseng, Yinqiang Zheng
Abstract

Efficiently transferring a Learned Image Compression (LIC) model from human perception to machine perception is an emerging challenge in vision-centric representation learning. Existing approaches typically adapt LIC to downstream tasks in a single-task manner, which is inefficient, lacks task interaction, and produces multiple task-specific bitstreams. To address these limitations, we propose an asymmetric adaptor framework that supports multi-task adaptation within a single model. Our method introduces a shared adaptor to learn general semantic features and task-specific adaptors to preserve task-level distinctions. With only lightweight plug-in modules and a frozen base codec, our method achieves strong performance across multiple tasks while maintaining compression efficiency. Experiments on the PASCAL-Context benchmark show that our method outperforms both fully fine-tuned and other Parameter-Efficient Fine-Tuning (PEFT) baselines, validating the effectiveness of multi-vision transferring.
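The structure the abstract describes — a frozen base codec, one shared adaptor for general semantics, and lightweight per-task adaptors — can be sketched as follows. This is a toy illustration of the routing only, not the authors' implementation: the function names, the scalar "layers", and the two task names are all placeholders.

```python
# Hypothetical sketch of the asymmetric adaptor framework: a frozen base
# codec produces a latent, a single shared adaptor refines it, and small
# task-specific adaptors branch off per downstream task. All parameters
# here are illustrative stand-ins for real learned layers.

def linear(w, b, x):
    """Toy 1-D 'layer': elementwise scale and shift."""
    return [w * v + b for v in x]

def base_codec(x):
    # Frozen pretrained codec: its parameters are never updated
    # during adaptation.
    return linear(1.0, 0.0, x)

def shared_adaptor(z):
    # Trainable plug-in shared across all tasks
    # (learns general semantic features).
    return linear(0.9, 0.1, z)

# One lightweight trainable adaptor per downstream task
# (preserves task-level distinctions). Task names are hypothetical.
task_adaptors = {
    "segmentation": lambda z: linear(1.1, 0.0, z),
    "detection":    lambda z: linear(0.8, 0.2, z),
}

def adapt(x, task):
    """A single latent serves all tasks: only the last hop differs."""
    z = shared_adaptor(base_codec(x))   # common path, computed once
    return task_adaptors[task](z)       # task-specific branch

features = adapt([1.0, 2.0], "segmentation")
```

The point of the asymmetry is that the expensive common path (codec plus shared adaptor) is evaluated once per image, while each task pays only for its own small branch, which is what avoids emitting multiple task-specific bitstreams.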

@article{zhao2025_2504.12997,
  title={All-in-One Transferring Image Compression from Human Perception to Multi-Machine Perception},
  author={Jiancheng Zhao and Xiang Ji and Zhuoxiao Li and Zunian Wan and Weihang Ran and Mingze Ma and Muyao Niu and Yifan Zhan and Cheng-Ching Tseng and Yinqiang Zheng},
  journal={arXiv preprint arXiv:2504.12997},
  year={2025}
}