
NeuronSeek: On Stability and Expressivity of Task-driven Neurons

Main: 11 pages, 10 figures, 7 tables; bibliography: 3 pages
Abstract

Drawing inspiration from the human brain, which employs different neurons for different tasks, recent advances in deep learning have explored modifying a network's neurons to develop so-called task-driven neurons. Prototyping task-driven neurons (referred to as NeuronSeek) employs symbolic regression (SR) to discover the optimal neuron formulation and constructs a network from these optimized neurons. Along this direction, this work replaces symbolic regression with tensor decomposition (TD) to discover optimal neuronal formulations, offering enhanced stability and faster convergence. Furthermore, we establish theoretical guarantees that modifying the aggregation functions with common activation functions can empower a network with a fixed number of parameters to approximate any continuous function with an arbitrarily small error, providing a rigorous mathematical foundation for the NeuronSeek framework. Extensive empirical evaluations demonstrate that our NeuronSeek-TD framework not only achieves superior stability but also remains competitive with state-of-the-art models across diverse benchmarks. The code is available at this https URL.
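To make the tensor-decomposition idea concrete, here is a minimal illustrative sketch (not the paper's actual method; all names and the rank-2 setup are assumptions): a quadratic task-driven neuron aggregates inputs as x^T W x, and a low-rank factorization of the interaction matrix W yields an equivalent but far more compact aggregation formula — the kind of structure a TD-based search can discover.

```python
import numpy as np

# Illustrative only: a quadratic neuron whose pairwise-interaction matrix W
# is replaced by a rank-r factorization U @ U.T, shrinking the aggregation
# from d*d parameters to d*r while representing the same function.

rng = np.random.default_rng(0)
d, r = 8, 2  # input dimension and assumed interaction rank

# Ground-truth low-rank interaction structure.
U_true = rng.standard_normal((d, r))
W_true = U_true @ U_true.T

def full_agg(x, W):
    """Full quadratic aggregation f(x) = x^T W x (d*d parameters)."""
    return x @ W @ x

def factored_agg(x, U):
    """Factorized aggregation f(x) = ||U^T x||^2 (d*r parameters)."""
    z = U.T @ x
    return z @ z

x = rng.standard_normal(d)
# With U = U_true the two formulations agree exactly.
assert np.isclose(full_agg(x, W_true), factored_agg(x, U_true))
print(f"parameters: full={d * d}, rank-{r} factorization={d * r}")
```

In an actual TD-based search, U would be learned from task data (e.g., by gradient descent), and the recovered rank controls the complexity of the candidate neuron formulation.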

@article{pei2025_2506.15715,
  title={NeuronSeek: On Stability and Expressivity of Task-driven Neurons},
  author={Hanyu Pei and Jing-Xiao Liao and Qibin Zhao and Ting Gao and Shijun Zhang and Xiaoge Zhang and Feng-Lei Fan},
  journal={arXiv preprint arXiv:2506.15715},
  year={2025}
}