
Msmsfnet: a multi-stream and multi-scale fusion net for edge detection

Abstract

Edge detection is a long-standing problem in computer vision. Despite the efficiency of existing algorithms, their performance relies heavily on backbone weights pre-trained on the ImageNet dataset. This reliance significantly increases the difficulty of designing new models for edge detection that do not build on existing well-trained ImageNet models, as pre-training a model on the ImageNet dataset is expensive yet becomes compulsory to ensure a fair comparison. Besides, the pre-training and fine-tuning strategy is not always useful and is sometimes not even applicable. For instance, weights pre-trained on the ImageNet dataset are unlikely to be helpful for edge detection in Synthetic Aperture Radar (SAR) images due to the strong statistical differences between optical and SAR images. Moreover, no dataset of comparable size to the ImageNet dataset exists for SAR image processing. In this work, we study the performance achievable by state-of-the-art deep learning based edge detectors on publicly available datasets when they are trained from scratch, and devise a new network architecture, the multi-stream and multi-scale fusion net (msmsfnet), for edge detection. Our experiments show that, when all models are trained from scratch, our model outperforms state-of-the-art edge detectors on three publicly available datasets. We also demonstrate the efficiency of our model for edge detection in SAR images, where no useful pre-trained weights are available. Finally, we show that our model achieves competitive performance on the BSDS500 dataset when pre-trained weights are used.
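The abstract does not specify the architecture of msmsfnet, so the following is only a minimal, hypothetical sketch of what a generic multi-stream, multi-scale fusion block could look like in PyTorch: parallel convolutional streams with different dilation rates capture multiple scales, and a 1x1 convolution fuses them. The class and parameter names (MultiScaleFusionBlock, dilations) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch, NOT the authors' msmsfnet: parallel dilated-conv
# streams (multi-scale) fused by a 1x1 convolution (multi-stream fusion).
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    def __init__(self, in_channels, out_channels, dilations=(1, 2, 4)):
        super().__init__()
        # One stream per dilation rate; each stream sees a different receptive field.
        self.streams = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, out_channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(out_channels * len(dilations), out_channels,
                              kernel_size=1)

    def forward(self, x):
        feats = [stream(x) for stream in self.streams]
        return self.fuse(torch.cat(feats, dim=1))

# Usage example: a per-pixel edge head on top of one fusion block.
if __name__ == "__main__":
    block = MultiScaleFusionBlock(3, 32)
    head = nn.Conv2d(32, 1, kernel_size=1)
    img = torch.randn(1, 3, 321, 481)   # BSDS500-sized input
    edge_logits = head(block(img))      # per-pixel edge logits
    print(edge_logits.shape)            # torch.Size([1, 1, 321, 481])
```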

@article{liu2025_2404.04856,
  title={Msmsfnet: a multi-stream and multi-scale fusion net for edge detection},
  author={Chenguang Liu and Chisheng Wang and Feifei Dong and Xiayang Xiao and Xin Su and Chuanhua Zhu and Dejin Zhang and Qingquan Li},
  journal={arXiv preprint arXiv:2404.04856},
  year={2025}
}