arXiv:2002.10006
On the Modularity of Hypernetworks

23 February 2020
Tomer Galanti
Lior Wolf
Abstract

In the context of learning to map an input $I$ to a function $h_I:\mathcal{X}\to \mathbb{R}$, two alternative methods are compared: (i) an embedding-based method, which learns a fixed function in which $I$ is encoded as a conditioning signal $e(I)$ and the learned function takes the form $h_I(x) = q(x, e(I))$, and (ii) hypernetworks, in which the weights $\theta_I$ of the function $h_I(x) = g(x; \theta_I)$ are produced by a hypernetwork $f$ as $\theta_I = f(I)$. In this paper, we define the property of modularity as the ability to effectively learn a different function for each input instance $I$. For this purpose, we adopt an expressivity perspective on this property and extend the theory of DeVore et al. (1996) to provide a lower bound on the complexity (number of trainable parameters) of neural networks as function approximators, obtained by removing the requirement that the approximation method be robust. Our results are then used to compare the complexities of $q$ and $g$, showing that under certain conditions, and when the functions $e$ and $f$ are allowed to be as large as we wish, $g$ can be smaller than $q$ by orders of magnitude. This sheds light on the modularity of hypernetworks in comparison with the embedding-based method. In addition, we show that for a structured target function, the overall number of trainable parameters in a hypernetwork is smaller by orders of magnitude than the number of trainable parameters of a standard neural network and of an embedding method.
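To make the two parameterizations concrete, the following is a minimal sketch (not taken from the paper; all layer sizes, class names, and shapes are hypothetical) of an embedding-based model $h_I(x) = q(x, e(I))$ next to a hypernetwork model $h_I(x) = g(x; \theta_I)$ with $\theta_I = f(I)$, written with PyTorch.

```python
# Minimal sketch of the two alternatives compared in the abstract.
# All dimensions and architectures are illustrative assumptions.

import torch
import torch.nn as nn

class EmbeddingMethod(nn.Module):
    """h_I(x) = q(x, e(I)): I is encoded once and fed to a fixed network q."""
    def __init__(self, dim_i, dim_x, dim_e=16, hidden=64):
        super().__init__()
        self.e = nn.Sequential(nn.Linear(dim_i, dim_e), nn.ReLU())
        self.q = nn.Sequential(
            nn.Linear(dim_x + dim_e, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x, I):
        # Concatenate the conditioning signal e(I) to x and apply q.
        return self.q(torch.cat([x, self.e(I)], dim=-1))

class HyperMethod(nn.Module):
    """h_I(x) = g(x; theta_I): f maps I to the weights theta_I of a small g."""
    def __init__(self, dim_i, dim_x, hidden_g=8, hidden_f=64):
        super().__init__()
        self.dim_x, self.hidden_g = dim_x, hidden_g
        # Parameter count of g: a one-hidden-layer network x -> R.
        n_theta = dim_x * hidden_g + hidden_g + hidden_g + 1
        self.f = nn.Sequential(
            nn.Linear(dim_i, hidden_f), nn.ReLU(), nn.Linear(hidden_f, n_theta)
        )

    def forward(self, x, I):
        theta = self.f(I)                                   # per-instance weights
        d, h = self.dim_x, self.hidden_g
        W1 = theta[..., : d * h].reshape(*theta.shape[:-1], h, d)
        b1 = theta[..., d * h : d * h + h]
        W2 = theta[..., d * h + h : d * h + 2 * h]
        b2 = theta[..., -1:]
        # g(x; theta_I): one hidden layer with ReLU, then a linear readout.
        z = torch.relu(torch.einsum("...hd,...d->...h", W1, x) + b1)
        return (z * W2).sum(-1, keepdim=True) + b2

x = torch.randn(4, 3)   # batch of inputs x
I = torch.randn(4, 5)   # batch of conditioning inputs I
print(EmbeddingMethod(5, 3)(x, I).shape, HyperMethod(5, 3)(x, I).shape)
```

Note that in the hypernetwork case only $g$ plays the role of the target function class whose size the paper's lower bound addresses; the hypernetwork $f$ (like the embedding $e$) may be made as large as desired.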
