arXiv:2207.13944
On the Multidimensional Random Subset Sum Problem

28 July 2022
L. Becchetti
Arthur Carvalho Walraven da Cunha
A. Clementi
Francesco d’Amore
Hicham Lesfari
Emanuele Natale
Luca Trevisan
Abstract

In the Random Subset Sum Problem, given $n$ i.i.d. random variables $X_1, \dots, X_n$, we wish to approximate any point $z \in [-1, 1]$ as the sum of a suitable subset $X_{i_1(z)}, \dots, X_{i_s(z)}$ of them, up to error $\varepsilon$. Despite its simple statement, this problem is of fundamental interest to both theoretical computer science and statistical mechanics. More recently, it gained renewed attention for its implications in the theory of Artificial Neural Networks. An obvious multidimensional generalisation of the problem is to consider $n$ i.i.d. $d$-dimensional random vectors, with the objective of approximating every point $\mathbf{z} \in [-1, 1]^d$. In 1998, G. S. Lueker showed that, in the one-dimensional setting, $n = \mathcal{O}(\log \frac{1}{\varepsilon})$ samples guarantee the approximation property with high probability. In this work, we prove that, in $d$ dimensions, $n = \mathcal{O}(d^3 \log \frac{1}{\varepsilon} \cdot (\log \frac{1}{\varepsilon} + \log d))$ samples suffice for the approximation property to hold with high probability. As an application highlighting the potential interest of this result, we prove that a recently proposed neural network model exhibits universality: with high probability, the model can approximate any neural network within a polynomial overhead in the number of parameters.
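The one-dimensional phenomenon the abstract builds on (Lueker's result) is easy to check empirically. The sketch below is illustrative only, not the paper's construction: it draws $n$ samples uniformly from $[-1, 1]$, enumerates all $2^n$ subset sums by brute force, and measures the worst approximation error over random targets $z \in [-1, 1]$. All function names and the choice of the uniform distribution are assumptions made for this demonstration.

```python
import bisect
import random

def subset_sums(xs):
    """Enumerate all 2^len(xs) subset sums (including the empty subset)."""
    sums = [0.0]
    for x in xs:
        sums += [s + x for s in sums]
    return sums

def worst_case_error(n, num_targets=500, seed=0):
    """Draw n samples X_i ~ Uniform(-1, 1), then report the largest
    distance from a random target z in [-1, 1] to its nearest subset sum."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    sums = sorted(subset_sums(xs))
    worst = 0.0
    for _ in range(num_targets):
        z = rng.uniform(-1.0, 1.0)
        i = bisect.bisect_left(sums, z)
        # Nearest achievable sum is one of the two neighbours of z.
        err = min(abs(sums[j] - z) for j in (i - 1, i) if 0 <= j < len(sums))
        worst = max(worst, err)
    return worst
```

For instance, `worst_case_error(18)` already yields a very small error: with 18 samples there are $2^{18}$ subset sums, and a large fraction of them land in $[-1, 1]$, which is the qualitative content of the theorem ($n$ only needs to grow like $\log \frac{1}{\varepsilon}$).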
