
SL²A-INR: Single-Layer Learnable Activation for Implicit Neural Representation

Abstract

Implicit Neural Representation (INR), which leverages a neural network to map coordinate inputs to their corresponding attributes, has recently driven significant advances in several vision-related domains. However, the performance of INR is heavily influenced by the choice of the nonlinear activation function used in its multilayer perceptron (MLP) architecture. Multiple nonlinearities have been investigated to date, but current INRs still face limitations in capturing high-frequency components and diverse signal types. We show that these challenges can be alleviated by introducing a novel INR architecture. Specifically, we propose SL²A-INR, a hybrid network that combines a single layer with a learnable activation function and an MLP with traditional ReLU activations. Our method achieves superior performance across diverse tasks, including image representation, 3D shape reconstruction, and novel view synthesis. Through comprehensive experiments, SL²A-INR sets new benchmarks in accuracy, quality, and robustness for INR. Our code is publicly available on GitHub (this https URL).
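To make the architecture concrete, below is a minimal PyTorch sketch of the idea the abstract describes: a first layer whose activation is learnable, followed by a standard ReLU MLP. The Fourier-basis parameterization of the learnable activation, the class names (LearnableActivation, SL2A_INR), and all hyperparameters are illustrative assumptions, not the paper's actual design; see the linked repository for the authors' implementation.

import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    # Elementwise activation parameterized as a learnable sum of sinusoids.
    # NOTE: this Fourier-basis parameterization is an illustrative assumption;
    # the paper's actual learnable-activation design may differ.
    def __init__(self, num_frequencies=8):
        super().__init__()
        self.freqs = nn.Parameter(torch.randn(num_frequencies) * 2.0)              # learnable frequencies
        self.amps = nn.Parameter(torch.randn(num_frequencies) / num_frequencies)   # learnable amplitudes

    def forward(self, x):
        # x: (..., dim) -> broadcast to (..., dim, K) sinusoids, weight, sum over K.
        return (self.amps * torch.sin(x.unsqueeze(-1) * self.freqs)).sum(-1)

class SL2A_INR(nn.Module):
    # Hybrid INR per the abstract: one layer with a learnable activation,
    # followed by a conventional ReLU MLP (depth/width here are assumptions).
    def __init__(self, in_dim=2, hidden_dim=256, depth=4, out_dim=3):
        super().__init__()
        self.first = nn.Linear(in_dim, hidden_dim)
        self.act = LearnableActivation()
        layers = []
        for _ in range(depth - 1):
            layers += [nn.Linear(hidden_dim, hidden_dim), nn.ReLU()]
        layers.append(nn.Linear(hidden_dim, out_dim))
        self.mlp = nn.Sequential(*layers)

    def forward(self, coords):
        return self.mlp(self.act(self.first(coords)))

# Example usage: fit an image by mapping 2D pixel coordinates to RGB values.
model = SL2A_INR(in_dim=2, out_dim=3)
coords = torch.rand(1024, 2) * 2 - 1   # pixel coordinates scaled to [-1, 1]^2
rgb = model(coords)                    # (1024, 3) predicted colors

The intuition suggested by the abstract is that a single expressive, learnable first-layer activation can supply the high-frequency capacity that plain ReLU MLPs lack, while the rest of the network remains cheap and standard.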

@article{heidari2025_2409.10836,
  title={SL$^{2}$A-INR: Single-Layer Learnable Activation for Implicit Neural Representation},
  author={Moein Heidari and Reza Rezaeian and Reza Azad and Dorit Merhof and Hamid Soltanian-Zadeh and Ilker Hacihaliloglu},
  journal={arXiv preprint arXiv:2409.10836},
  year={2025}
}