
Lumina-OmniLV: A Unified Multimodal Framework for General Low-Level Vision

Abstract

We present Lumina-OmniLV (abbreviated as OmniLV), a universal multimodal multi-task framework for low-level vision that addresses over 100 sub-tasks across four major categories: image restoration, image enhancement, weak-semantic dense prediction, and stylization. OmniLV leverages both textual and visual prompts to offer flexible and user-friendly interaction. Built on Diffusion Transformer (DiT)-based generative priors, the framework supports arbitrary resolutions, achieving its best performance at 1K resolution while preserving fine-grained details and high fidelity. Through extensive experiments, we demonstrate that encoding text and visual instructions separately, combined with co-training using shallow feature control, is essential for mitigating task ambiguity and improving multi-task generalization. Our findings also reveal that integrating high-level generative tasks into low-level vision models can compromise detail-sensitive restoration. These insights pave the way for more robust and generalizable low-level vision systems.
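To make the abstract's key design concrete, the following is a minimal sketch (not the authors' code) of the two ideas it highlights: text and visual prompts are encoded by separate encoders, and the visual-prompt features are injected only into the shallow blocks of a DiT backbone ("shallow feature control"). All module names, dimensions, and the zero-initialized gating are illustrative assumptions, written in PyTorch.

```python
# Hypothetical sketch of separate prompt encoding + shallow feature control.
# Not the Lumina-OmniLV implementation; shapes and modules are assumptions.
import torch
import torch.nn as nn


class DiTBlock(nn.Module):
    """A simplified DiT block: self-attention, cross-attention to text, MLP."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        # Cross-attention to the separately encoded text instruction.
        self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x, text_ctx):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        x = x + self.cross(h, text_ctx, text_ctx, need_weights=False)[0]
        return x + self.mlp(self.norm3(x))


class OmniLVSketch(nn.Module):
    def __init__(self, dim: int = 256, depth: int = 8, shallow_layers: int = 2):
        super().__init__()
        self.blocks = nn.ModuleList([DiTBlock(dim) for _ in range(depth)])
        # Separate projections for the two prompt modalities (assumed to come
        # from a frozen text encoder and a tokenized example-image encoder).
        self.text_proj = nn.Linear(512, dim)
        self.visual_proj = nn.Linear(dim, dim)
        # Zero-initialized gates so the visual control starts as a no-op
        # (a ControlNet-style choice, assumed here for training stability).
        self.gates = nn.ModuleList([nn.Linear(dim, dim) for _ in range(shallow_layers)])
        for g in self.gates:
            nn.init.zeros_(g.weight)
            nn.init.zeros_(g.bias)
        self.shallow_layers = shallow_layers

    def forward(self, latent_tokens, text_emb, visual_tokens):
        text_ctx = self.text_proj(text_emb)
        ctrl = self.visual_proj(visual_tokens)
        x = latent_tokens
        for i, blk in enumerate(self.blocks):
            if i < self.shallow_layers:  # inject visual control only in shallow blocks
                x = x + self.gates[i](ctrl)
            x = blk(x, text_ctx)
        return x


# Toy shapes: batch 2, 64 latent tokens, 16 text tokens, 64 visual-prompt tokens.
model = OmniLVSketch()
out = model(torch.randn(2, 64, 256), torch.randn(2, 16, 512), torch.randn(2, 64, 256))
print(out.shape)  # torch.Size([2, 64, 256])
```

Keeping the visual-prompt pathway confined to the shallow layers is one plausible reading of the paper's finding: early blocks carry the low-level detail that restoration depends on, while deeper blocks remain driven by the text instruction.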

@article{pu2025_2504.04903,
  title={Lumina-OmniLV: A Unified Multimodal Framework for General Low-Level Vision},
  author={Yuandong Pu and Le Zhuo and Kaiwen Zhu and Liangbin Xie and Wenlong Zhang and Xiangyu Chen and Peng Gao and Yu Qiao and Chao Dong and Yihao Liu},
  journal={arXiv preprint arXiv:2504.04903},
  year={2025}
}