Deformable Attentive Visual Enhancement for Referring Segmentation Using Vision-Language Model

Image segmentation is a fundamental task in computer vision, aimed at partitioning an image into semantically meaningful regions. Referring image segmentation extends this task by using natural language expressions to localize specific objects, requiring effective integration of visual and linguistic information. In this work, we propose SegVLM, a vision-language model that incorporates architectural improvements to enhance segmentation accuracy and cross-modal alignment. The model integrates squeeze-and-excitation (SE) blocks for dynamic feature recalibration, deformable convolutions for geometric adaptability, and residual connections for deep feature learning. We also introduce a novel referring-aware fusion (RAF) loss that balances region-level alignment, boundary precision, and class imbalance. Extensive experiments and ablation studies demonstrate that each component contributes to consistent performance improvements. SegVLM also shows strong generalization across diverse datasets and referring expression scenarios.
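The abstract names three architectural ingredients: SE blocks for dynamic channel recalibration, deformable convolutions for geometric adaptability, and residual connections. The sketch below (PyTorch, using torchvision's DeformConv2d) shows one plausible way these could compose into a single feature-enhancement block; the module layout, channel count, and SE reduction ratio are illustrative assumptions, not SegVLM's exact design.

```python
# Minimal sketch: deformable conv -> SE recalibration -> residual add.
# All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise recalibration of features."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global context
        self.fc = nn.Sequential(                       # excitation: channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # reweight each channel


class DeformableSEResBlock(nn.Module):
    """Hypothetical block combining the three components from the abstract."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Offsets are predicted from the input itself: 2 values (dy, dx)
        # per kernel tap, so 2 * k * k offset channels.
        self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)
        self.se = SEBlock(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.deform(x, self.offset(x))           # geometrically adaptive conv
        out = self.se(out)                             # dynamic feature recalibration
        return self.act(out + x)                       # residual connection


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    print(DeformableSEResBlock(64)(feats).shape)       # torch.Size([2, 64, 32, 32])
```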
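The RAF loss is characterized only by its three objectives (region-level alignment, boundary precision, class imbalance); its exact formulation is not given in the abstract. One plausible composition under that description pairs a soft Dice term for region overlap, an L1 penalty on mask gradient maps for boundaries, and a focal term for imbalance, mixed by weights lambda_r, lambda_b, and lambda_f. Everything below is an assumption for illustration.

```python
# Hypothetical RAF-style loss: region (Dice) + boundary + focal terms.
import torch
import torch.nn.functional as F


def raf_loss(logits: torch.Tensor, target: torch.Tensor,
             lambda_r: float = 1.0, lambda_b: float = 1.0,
             lambda_f: float = 1.0, gamma: float = 2.0) -> torch.Tensor:
    """logits, target: (B, 1, H, W); target is a binary mask in {0, 1}."""
    prob = torch.sigmoid(logits)

    # Region-level alignment: soft Dice over the predicted mask.
    inter = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + 1.0) / (union + 1.0)

    # Boundary precision: L1 mismatch between spatial gradient (edge) maps.
    def edge_maps(m):
        dy = (m[:, :, 1:, :] - m[:, :, :-1, :]).abs()
        dx = (m[:, :, :, 1:] - m[:, :, :, :-1]).abs()
        return dy, dx

    dy_p, dx_p = edge_maps(prob)
    dy_t, dx_t = edge_maps(target)
    boundary = ((dy_p - dy_t).abs().mean(dim=(1, 2, 3))
                + (dx_p - dx_t).abs().mean(dim=(1, 2, 3)))

    # Class imbalance: focal loss down-weights easy background pixels.
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = prob * target + (1 - prob) * (1 - target)
    focal = ((1 - p_t) ** gamma * bce).mean(dim=(1, 2, 3))

    return (lambda_r * dice + lambda_b * boundary + lambda_f * focal).mean()
```

In this reading, the weights control the trade-off the abstract describes: lambda_r drives region-level alignment, lambda_b sharpens boundaries, and lambda_f counters foreground-background imbalance.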
@article{dalaq2025_2505.19242,
  title   = {Deformable Attentive Visual Enhancement for Referring Segmentation Using Vision-Language Model},
  author  = {Alaa Dalaq and Muzammil Behzad},
  journal = {arXiv preprint arXiv:2505.19242},
  year    = {2025}
}