Exploring Kernel Transformations for Implicit Neural Representations

7 April 2025
Sheng Zheng
Chaoning Zhang
Dongshen Han
Fachrina Dewi Puspitasari
Xinhong Hao
Yang Yang
Heng Tao Shen
Abstract

Implicit neural representations (INRs), which leverage neural networks to represent signals by mapping coordinates to their corresponding attributes, have garnered significant attention. They are extensively used for image representation, taking pixel coordinates as input and producing pixel values as output. In contrast to prior works that investigate the effect of the model's internal components (the activation function, for instance), this work pioneers the exploration of the effect of kernel transformations of the input/output while keeping the model itself unchanged. A byproduct of our findings is a simple yet effective method that combines scale and shift to significantly boost INR performance with negligible computation overhead. Moreover, we present two perspectives, depth and normalization, to interpret the performance benefits brought by the scale and shift transformation. Overall, our work provides a new avenue for future work to understand and improve INR through the lens of kernel transformation.
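
As a rough illustration of the idea described in the abstract, the PyTorch sketch below applies an affine scale-and-shift transformation to the inputs and outputs of an otherwise unchanged coordinate MLP fitted to an image. The network architecture, the particular scale/shift values, and the training loop are illustrative assumptions, not the authors' reported configuration.

import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """A plain coordinate-to-RGB MLP; the model itself is left unchanged."""
    def __init__(self, in_dim=2, hidden=256, out_dim=3, depth=4):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        layers += [nn.Linear(dim, out_dim)]
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

class ScaleShiftWrapper(nn.Module):
    """Applies x -> a*x + b before the model and y -> c*y + d after it.
    The specific values here are hypothetical placeholders."""
    def __init__(self, model, in_scale=2.0, in_shift=-1.0,
                 out_scale=0.5, out_shift=0.5):
        super().__init__()
        self.model = model
        self.in_scale, self.in_shift = in_scale, in_shift
        self.out_scale, self.out_shift = out_scale, out_shift

    def forward(self, coords):
        x = self.in_scale * coords + self.in_shift   # transform the input
        y = self.model(x)
        return self.out_scale * y + self.out_shift   # transform the output

# Fit a single image: coordinates in [0, 1]^2, pixel values in [0, 1].
inr = ScaleShiftWrapper(CoordinateMLP())
coords = torch.rand(1024, 2)   # pixel coordinates (toy data)
target = torch.rand(1024, 3)   # corresponding RGB values (toy data)
opt = torch.optim.Adam(inr.parameters(), lr=1e-4)
for _ in range(100):
    opt.zero_grad()
    loss = torch.mean((inr(coords) - target) ** 2)
    loss.backward()
    opt.step()

Because the wrapper only rescales inputs and outputs, the overhead is negligible relative to the MLP itself, which is consistent with the abstract's claim that the model is kept unchanged.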

View on arXiv
@article{zheng2025_2504.04728,
  title={Exploring Kernel Transformations for Implicit Neural Representations},
  author={Sheng Zheng and Chaoning Zhang and Dongshen Han and Fachrina Dewi Puspitasari and Xinhong Hao and Yang Yang and Heng Tao Shen},
  journal={arXiv preprint arXiv:2504.04728},
  year={2025}
}