Underwater Image Enhancement via Dehazing and Color Restoration

15 September 2024
Chengqin Wu
Shuai Yu
Tuyan Luo
Qiuhua Rao
Qingson Hu
Jingxiang Xu
Lijun Zhang
Abstract

Underwater visual imaging is crucial for marine engineering, but it suffers from low contrast, blurriness, and color degradation, which hinder downstream analysis. Existing underwater image enhancement methods often treat haze and color cast as a single, unified degradation process, neglecting both their inherent independence and their synergistic relationship. To overcome this limitation, we propose a Vision Transformer (ViT)-based network, referred to as WaterFormer, to improve underwater image quality. WaterFormer contains three major components: a dehazing block (DehazeFormer Block) that captures self-correlated haze features and extracts deep-level features, a Color Restoration Block (CRB) that captures self-correlated color cast features, and a Channel Fusion Block (CFB) that dynamically integrates these decoupled features to achieve comprehensive enhancement. To ensure authenticity, a soft reconstruction layer based on the underwater imaging physics model is included. Furthermore, a Chromatic Consistency Loss and a Sobel Color Loss are designed to preserve color fidelity and enhance structural details, respectively, during network training. Comprehensive experimental results demonstrate that WaterFormer outperforms other state-of-the-art methods in enhancing underwater images.
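The abstract mentions a soft reconstruction layer derived from the underwater imaging physics model and two training objectives, a Chromatic Consistency Loss and a Sobel Color Loss, without giving their exact formulas. The PyTorch sketch below is one plausible reading of these ideas, assuming the common simplified imaging model I = J·t + B·(1−t), where I is the observed image, J the scene radiance, t the transmission map, and B the background light. All function names, the specific form of the chromatic term, and the small stabilizing constants are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def soft_reconstruction(raw, transmission, background_light):
    """Invert the simplified underwater model I = J*t + B*(1-t) to recover
    the scene radiance J from the raw image and predicted t and B.
    (Assumed form; the paper's soft reconstruction layer may differ.)"""
    t = transmission.clamp(min=1e-3)  # avoid division by zero
    return (raw - background_light * (1.0 - t)) / t

def sobel_edges(img):
    """Per-channel Sobel gradient magnitude for an (N, C, H, W) tensor."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    c = img.shape[1]
    gx = F.conv2d(img, kx.repeat(c, 1, 1, 1), padding=1, groups=c)
    gy = F.conv2d(img, ky.repeat(c, 1, 1, 1), padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def sobel_color_loss(pred, target):
    """L1 distance between Sobel edge maps of the enhanced image and the
    reference, computed on all color channels so structural detail in
    each channel is supervised."""
    return F.l1_loss(sobel_edges(pred), sobel_edges(target))

def chromatic_consistency_loss(pred, target):
    """Match per-channel chromaticity (channel mean over overall mean)
    between prediction and reference - a hypothetical interpretation of
    the chromatic consistency objective described in the abstract."""
    def chroma(x):
        ch_mean = x.mean(dim=(2, 3))  # (N, C)
        return ch_mean / (ch_mean.mean(dim=1, keepdim=True) + 1e-6)
    return F.l1_loss(chroma(pred), chroma(target))
```

In this sketch the network would predict the transmission map and background light alongside its enhanced features, and the total training loss would combine a standard reconstruction term with weighted versions of the two losses above; the actual weighting and prediction heads used by WaterFormer are not specified in the abstract.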

@article{wu2025_2409.09779,
  title={Underwater Image Enhancement via Dehazing and Color Restoration},
  author={Chengqin Wu and Shuai Yu and Tuyan Luo and Qiuhua Rao and Qingson Hu and Jingxiang Xu and Lijun Zhang},
  journal={arXiv preprint arXiv:2409.09779},
  year={2025}
}