Interpolating between boolean and extremely high noisy patterns through Minimal Dense Associative Memories

Abstract

Recently, Hopfield and Krotov introduced the concept of {\em dense associative memories} (DAM), close to spin glasses with $P$-wise interactions in the disordered statistical-mechanics jargon: they proved a number of remarkable features these networks share and suggested their use to (partially) explain the success of the new generation of Artificial Intelligence. Thanks to a remarkable {\em ante litteram} analysis by Baldi \& Venkatesh, it is known that, among these properties, these networks can handle a maximal amount of stored patterns $K$ scaling as $K \sim N^{P-1}$.\\ In this paper, once a {\em minimal dense associative network} is introduced as one of the most elementary cost functions falling in this class of DAM, we sacrifice this high-load regime, namely we force the storage of {\em solely} a linear amount of patterns, i.e. $K = \alpha N$ (with $\alpha > 0$), to prove that, in this regime, these networks can correctly perform pattern recognition even if the pattern signal is $O(1)$ and is embedded in a sea of noise $O(\sqrt{N})$, also in the large-$N$ limit. To prove this statement, by extremizing the quenched free energy of the model over its natural order parameters (the various magnetizations and overlaps), we derive its phase diagram, at the replica-symmetric level of description and in the thermodynamic limit: as a sideline, we stress that, to achieve this task, aiming at cross-fertilization among disciplines, we follow the two hegemonic routes of the statistical mechanics of spin glasses, namely the replica trick and the interpolation technique.\\ Both approaches reach the same conclusion: there is a non-empty region, in the noise $T$ vs. load $\alpha$ phase-diagram plane, where these networks can actually work in this challenging regime; in particular, we obtain a rather high critical (linear) load in the (fast) noiseless case, resulting in $\lim_{\beta \to \infty}\alpha_c(\beta) = 0.65$.
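As a toy illustration of the regime the abstract describes, and not the paper's actual model (whose precise "minimal" cost function is defined in the text), the following Python sketch stores $K = \alpha N$ boolean patterns in a generic $P$-wise DAM energy, corrupts one pattern with Gaussian noise of amplitude $\sqrt{N}$ so that the signal is $O(1)$ inside $O(\sqrt{N})$ noise, and runs zero-temperature ($\beta \to \infty$) single-spin-flip dynamics. The energy form, the choice $P = 4$, and all parameter values are assumptions made for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy parameters; the paper's minimal cost function may differ.
N, alpha, P = 200, 0.10, 4
K = int(alpha * N)                        # linear load K = alpha * N

xi = rng.choice([-1, 1], size=(K, N))     # K boolean patterns xi^mu in {-1,+1}^N

def energy(sigma):
    """Generic P-wise DAM cost: E(sigma) = -sum_mu (xi^mu . sigma)^P / N^(P-1)."""
    return -np.sum((xi @ sigma) ** P) / N ** (P - 1)

# Probe: O(1) signal (pattern 0) buried in O(sqrt(N)) Gaussian noise, re-binarized.
probe = np.sign(xi[0] + np.sqrt(N) * rng.standard_normal(N))

# Zero-temperature (beta -> infinity) dynamics: accept only energy-lowering flips.
sigma = probe.copy()
changed = True
while changed:
    changed = False
    for i in rng.permutation(N):
        e_before = energy(sigma)
        sigma[i] *= -1                    # trial spin flip
        if energy(sigma) >= e_before:     # reject if the energy does not decrease
            sigma[i] *= -1
        else:
            changed = True

# Mattis magnetization with the planted pattern, before and after the dynamics.
print(f"overlap before: {xi[0] @ probe / N:+.2f}, after: {xi[0] @ sigma / N:+.2f}")
```

Whether such dynamics recovers the planted pattern from so faint a signal, at a given noise level $T$ and load $\alpha$, is precisely the question the paper's replica-symmetric phase diagram answers; at this small $N$ the outcome of the sketch can go either way.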
