
HMPE: HeatMap Embedding for Efficient Transformer-Based Small Object Detection

Abstract

Current Transformer-based methods for small object detection continue to emerge, yet they still exhibit significant shortcomings. This paper introduces HeatMap Position Embedding (HMPE), a novel Transformer optimization technique that enhances object detection performance by dynamically integrating positional encoding with semantic detection information through heatmap-guided adaptive learning. We also innovatively visualize the HMPE method, offering clear visualization of the embedded information for parameter tuning. We then create the Multi-Scale ObjectBox-Heatmap Fusion Encoder (MOHFE) and HeatMap Induced High-Quality Queries for Decoder (HIDQ) modules. These are designed for the encoder and decoder, respectively, to generate high-quality queries and reduce background noise. Using both heatmap embedding and Linear-Snake Conv (LSConv) feature engineering, we enhance the embedding of massively diverse small object categories, reduce the number of decoder multihead layers, and thereby accelerate both inference and training. In generalization experiments, our approach outperforms the baseline mAP by 1.9% on the small object dataset (NWPU VHR-10) and by 1.2% on the general dataset (PASCAL VOC). By employing HMPE-enhanced embedding, we are able to reduce the number of decoder layers from eight to a minimum of three, significantly decreasing both inference and training costs.
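To make the core idea of heatmap-guided position embedding concrete, here is a minimal sketch (not the paper's implementation) of how a standard sinusoidal positional encoding can be modulated by a detection heatmap, so that positions with high objectness contribute stronger positional signals than background positions. The function names and the simple element-wise weighting scheme are illustrative assumptions, not the actual HMPE formulation.

```python
import numpy as np

def sinusoidal_pe(h: int, w: int, dim: int) -> np.ndarray:
    """Standard 1D sinusoidal positional encoding over the h*w flattened grid."""
    pos = np.arange(h * w)[:, None]          # (h*w, 1) position indices
    i = np.arange(dim)[None, :]              # (1, dim) channel indices
    angle = pos / np.power(10000.0, (2 * (i // 2)) / dim)
    # even channels get sin, odd channels get cos
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def heatmap_position_embedding(heatmap: np.ndarray, dim: int) -> np.ndarray:
    """Hypothetical sketch: weight each position's encoding by its heatmap score.

    Background positions (score ~0) are suppressed, so the embedding carries
    semantic detection information alongside position, as HMPE aims to do.
    """
    h, w = heatmap.shape
    pe = sinusoidal_pe(h, w, dim)            # (h*w, dim)
    weights = heatmap.reshape(-1, 1)         # (h*w, 1) objectness scores
    return pe * weights                      # semantically weighted encoding
```

In this toy version, a zero heatmap score removes a location's positional signal entirely; the actual method fuses heatmap and positional information through learned, multi-scale modules (MOHFE/HIDQ) rather than a fixed element-wise product.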

@article{zeng2025_2504.13469,
  title={HMPE: HeatMap Embedding for Efficient Transformer-Based Small Object Detection},
  author={YangChen Zeng},
  journal={arXiv preprint arXiv:2504.13469},
  year={2025}
}