
Switch-Based Multi-Part Neural Network

Abstract

This paper introduces a decentralized, modular neural network framework designed to improve the scalability, interpretability, and performance of artificial intelligence (AI) systems. At the heart of this framework is a dynamic switch mechanism that governs the selective activation and training of individual neurons based on input characteristics, allowing neurons to specialize in distinct segments of the data domain. Because each neuron learns from a disjoint subset of the data, the approach mimics biological brain function by promoting task specialization and improving the interpretability of network behavior. The paper also explores federated learning and decentralized training for real-world AI deployments, particularly in edge computing and distributed environments. By simulating localized training on non-overlapping data subsets, we demonstrate how modular networks can be trained and evaluated efficiently. The proposed framework further addresses scalability, enabling AI systems to handle large datasets and distributed processing while preserving model transparency and interpretability. Finally, we discuss the potential of this approach for designing scalable, privacy-preserving, and efficient AI systems across diverse applications.
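The abstract does not give implementation details, but the switch mechanism it describes can be sketched as follows. This is a hypothetical minimal illustration, not the authors' actual method: a `switch` function routes each input to one specialist part based on a simple input characteristic (here, the sign of the input's mean), and only the routed part is activated and updated, so each part trains on a disjoint subset of the data domain. All names (`switch`, `forward`, `train_step`, `N_PARTS`) are assumptions introduced for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical specialist "parts", each a single linear neuron.
N_PARTS, DIM = 2, 4
weights = [rng.normal(size=(DIM, 1)) for _ in range(N_PARTS)]

def switch(x):
    """Route the input to a part index based on an input characteristic
    (here, simply the sign of the input's mean)."""
    return 0 if x.mean() >= 0 else 1

def forward(x):
    k = switch(x)                        # select one specialist part
    return k, float(x @ weights[k])      # activate only that part

def train_step(x, y, lr=0.01):
    """Update only the routed part, so parts see disjoint data subsets."""
    k, pred = forward(x)
    grad = 2.0 * (pred - y) * x.reshape(-1, 1)   # squared-error gradient
    weights[k] -= lr * grad
    return k

# Inputs with positive vs. negative mean land in different parts.
x_pos = np.array([1.0, 2.0, 0.5, 1.5])
x_neg = -x_pos
train_step(x_pos, 1.0)    # routed to part 0
train_step(x_neg, -1.0)   # routed to part 1
```

The key property this sketch illustrates is that the switch partitions the input space, so each part's weights are only ever touched by its own segment of the data.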

@article{majumder2025_2504.18241,
  title={Switch-Based Multi-Part Neural Network},
  author={Surajit Majumder and Paritosh Ranjan and Prodip Roy and Bhuban Padhan},
  journal={arXiv preprint arXiv:2504.18241},
  year={2025}
}