Self-cross Feature based Spiking Neural Networks for Efficient Few-shot Learning

Deep neural networks (DNNs) excel at computer vision tasks, including few-shot learning (FSL), which is increasingly important for generalizing from limited examples. However, DNNs are computationally expensive and scale poorly in real-world deployments. Spiking neural networks (SNNs), with their event-driven nature and low energy consumption, are particularly efficient at processing sparse and dynamic data, though they still struggle to capture complex spatiotemporal features and perform accurate cross-class comparisons. To further enhance the performance and efficiency of SNNs in few-shot learning, we propose an SNN-based few-shot learning framework that combines a self-feature extractor module with a cross-feature contrastive module to refine feature representations and reduce power consumption. We combine a temporal efficient training (TET) loss with an InfoNCE loss to optimize the temporal dynamics of spike trains and enhance discriminative power. Experimental results show that the proposed FSL-SNN significantly improves classification performance on the neuromorphic dataset N-Omniglot, and achieves performance competitive with ANNs on static datasets such as CUB and miniImageNet at low power consumption.
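As a concrete illustration of the loss combination described in the abstract, below is a minimal PyTorch sketch pairing a TET-style loss (per-timestep cross-entropy averaged over the spike train) with an InfoNCE contrastive term. The tensor shapes, the weighting factor `lam`, and the temperature are illustrative assumptions; the paper's exact formulation and hyperparameters are not given in the abstract.

```python
import torch
import torch.nn.functional as F

def tet_loss(outputs, target):
    """TET-style loss: cross-entropy applied at every timestep of the
    spiking output (shape [T, B, C]) and averaged over T."""
    T = outputs.size(0)
    return sum(F.cross_entropy(outputs[t], target) for t in range(T)) / T

def info_nce_loss(query, positive, negatives, temperature=0.1):
    """InfoNCE: pull each query embedding toward its positive (e.g. a
    same-class prototype) and push it from negatives.
    query: [B, D], positive: [B, D], negatives: [B, K, D]."""
    query = F.normalize(query, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_logit = (query * positive).sum(-1, keepdim=True)       # [B, 1]
    neg_logits = torch.einsum('bd,bkd->bk', query, negatives)  # [B, K]
    logits = torch.cat([pos_logit, neg_logits], dim=1) / temperature
    # The positive sits at index 0 of every row of logits.
    labels = torch.zeros(query.size(0), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)

def combined_loss(outputs, target, query, positive, negatives, lam=0.5):
    # lam is a hypothetical weighting between the two terms.
    return tet_loss(outputs, target) + lam * info_nce_loss(query, positive, negatives)
```

The per-timestep supervision in `tet_loss` regularizes the temporal dynamics of the spike train, while the contrastive term sharpens cross-class separation in the embedding space, matching the two roles the abstract assigns to the combined objective.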
@article{xu2025_2505.07921,
  title   = {Self-cross Feature based Spiking Neural Networks for Efficient Few-shot Learning},
  author  = {Qi Xu and Junyang Zhu and Dongdong Zhou and Hao Chen and Yang Liu and Jiangrong Shen and Qiang Zhang},
  journal = {arXiv preprint arXiv:2505.07921},
  year    = {2025}
}