
OXSeg: Multidimensional attention UNet-based lip segmentation using semi-supervised lip contours

Abstract

Lip segmentation plays a crucial role in various domains, such as lip synchronization, lipreading, and diagnostics. However, the effectiveness of supervised lip segmentation is constrained by the availability of lip contours during the training phase. A further challenge is the reliance of lip segmentation on image quality, lighting, and skin tone, which leads to inaccuracies in the detected boundaries. To address these challenges, we propose a sequential lip segmentation method that integrates attention UNet and multidimensional input. We unravel the micro-patterns in facial images using local binary patterns to build multidimensional inputs. These inputs are then fed into sequential attention UNets, where the lip contour is reconstructed. To improve segmentation accuracy, we introduce a mask generation method that uses a few anatomical landmarks and estimates the complete lip contour; this mask is used during the training phase for lip segmentation. To evaluate the proposed method, we segment the upper lips in facial images and subsequently assess lip-related facial anomalies in subjects with fetal alcohol syndrome (FAS). The proposed lip segmentation method achieves a mean Dice score of 84.75% and a mean pixel accuracy of 99.77% in upper lip segmentation. To further evaluate the method, we implemented classifiers to identify individuals with FAS; using a generative adversarial network (GAN), we reached an accuracy of 98.55% in one of the study populations. This method can improve lip segmentation accuracy, especially around Cupid's bow, and shed light on distinct lip-related characteristics of FAS.
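
The sketch below is not the authors' code; it is a minimal illustration, under assumed parameter choices (LBP with P=8, R=1, "uniform" method), of the two ideas the abstract names: stacking a grayscale face crop with its local binary pattern map to form a multidimensional input, and computing the Dice score used to report segmentation accuracy.

# Minimal sketch (illustrative only): multidimensional input from LBP
# micro-patterns, and the Dice metric for binary lip masks.
import numpy as np
from skimage.feature import local_binary_pattern

def multidimensional_input(gray_face: np.ndarray) -> np.ndarray:
    """Stack the grayscale image with its LBP map as channels (H, W, 2)."""
    # P=8 neighbors on a radius-1 circle, "uniform" codes; assumed settings.
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    lbp = lbp / max(lbp.max(), 1.0)            # scale LBP codes to [0, 1]
    gray = gray_face.astype(np.float32) / 255.0
    return np.stack([gray, lbp], axis=-1)      # input to an attention UNet

def dice_score(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)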

@article{moghaddasi2025_2505.05531,
  title={OXSeg: Multidimensional attention UNet-based lip segmentation using semi-supervised lip contours},
  author={Hanie Moghaddasi and Christina Chambers and Sarah N. Mattson and Jeffrey R. Wozniak and Claire D. Coles and Raja Mukherjee and Michael Suttie},
  journal={arXiv preprint arXiv:2505.05531},
  year={2025}
}