Face Hallucination Using Split-Attention in Split-Attention Network

Recently, attention mechanisms have been applied to convolutional neural network (CNN)-based super-resolution (SR) tasks to exploit correlations within internal feature maps. However, most of these methods ignore the correlation between multi-path feature channels, which is needed for coarse-to-fine attention focusing. In this paper, we propose a split-attention in split-attention network (SISN) that fuses internal channel features with external (cross) multi-path features to explore face structure information. First, an internal-feature split attention block maintains the fidelity of local facial details. Then, an external-internal split attention group provides cross-feature interaction to fine-tune multi-path features and stabilize facial structure information. Finally, an external-feature fusion module fuses face structure and local detail features to preserve image consistency from coarse to fine. Experimental results demonstrate that the proposed approach consistently and significantly improves both subjective and objective performance on face hallucination over several state-of-the-art methods.
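
To make the core mechanism concrete, below is a minimal sketch of a generic split-attention block in PyTorch, assuming a ResNeSt-style formulation: parallel feature splits are summed, pooled to a channel descriptor, and re-weighted by a softmax across splits. The class name `SplitAttention` and parameters `radix` and `reduction` are illustrative assumptions, not the paper's implementation, which further nests this operation across internal channels and external multi-path features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitAttention(nn.Module):
    """Hypothetical split-attention block (ResNeSt-style sketch):
    soft channel-wise weights fuse `radix` parallel feature splits."""

    def __init__(self, channels: int, radix: int = 2, reduction: int = 4):
        super().__init__()
        self.radix = radix
        inter = max(channels // reduction, 8)
        self.fc1 = nn.Conv2d(channels, inter, kernel_size=1)
        self.fc2 = nn.Conv2d(inter, channels * radix, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, radix * channels, H, W) -- `radix` splits concatenated
        b, rc, h, w = x.shape
        c = rc // self.radix
        splits = x.view(b, self.radix, c, h, w)
        gap = splits.sum(dim=1)                  # sum splits: (b, c, H, W)
        gap = F.adaptive_avg_pool2d(gap, 1)      # channel descriptor: (b, c, 1, 1)
        attn = self.fc2(F.relu(self.fc1(gap)))   # (b, radix * c, 1, 1)
        attn = attn.view(b, self.radix, c, 1, 1)
        attn = F.softmax(attn, dim=1)            # attention across splits
        return (splits * attn).sum(dim=1)        # fused map: (b, c, H, W)

# Usage: fuse two parallel 64-channel feature paths
sa = SplitAttention(channels=64, radix=2)
feats = torch.randn(1, 128, 32, 32)  # two 64-channel splits, concatenated
out = sa(feats)                      # torch.Size([1, 64, 32, 32])
```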