pith. machine review for the scientific record.

arxiv: 2605.14059 · v1 · submitted 2026-05-13 · ❄️ cond-mat.dis-nn · stat.ML

Recognition: no theorem link

Finite-size scaling of hetero-associative retrieval in continuous-signal-driven Ising spin systems

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 05:36 UTC · model grok-4.3

classification ❄️ cond-mat.dis-nn stat.ML
keywords hetero-associative retrieval · Ising spin systems · finite-size scaling · continuous signals · cross-modal memory · sleep EEG · pseudo-inverse couplings · storage capacity

The pith

A tri-layer Ising hetero-associative network maps continuous signals to spins and retrieves across modalities at an operational capacity near 0.5.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper shows how to embed continuous high-dimensional signals into an Ising spin system for hetero-associative memory, using a PCA-whitening and SimHash encoder together with pseudo-inverse couplings. This construction makes the symmetric mixture state unstable, so that thermal fluctuations select a single retrieved pattern from one cued layer. The resulting operational capacity follows the expected finite-size correction and extrapolates to an asymptotic value of approximately 0.5. Applied to sleep EEG recordings, the model reconstructs the full set of parietal and eye-movement channels from a single noisy frontal channel.
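
A minimal sketch of such an encoder, assuming the paper's two stages compose in the obvious way (the function name and numerical details are ours, not the paper's):

```python
import numpy as np

def continuous_to_ising(X, n_spins, seed=0):
    """Sketch of a PCA-whitening + SimHash encoder.
    X: (n_samples, d) continuous signals -> (n_samples, n_spins) spins in {-1,+1}."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                           # centre the data
    eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
    Xw = Xc @ (eigvec / np.sqrt(eigval + 1e-12))      # whiten: unit variance per axis
    H = rng.standard_normal((Xw.shape[1], n_spins))   # random hyperplanes (SimHash)
    return np.where(Xw @ H >= 0, 1, -1)               # sign bits as Ising spins
```

The sign of each random projection is one SimHash bit; stacking n_spins of them gives the spin configuration fed to the memory couplings.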

Core claim

By coupling a geometry-preserving continuous-to-Ising encoder to Kanter-Sompolinsky pseudo-inverse memory couplings in a tri-layer system, the equal-weight mixture becomes thermodynamically unstable and thermal fluctuations break the symmetry to select a single global winner. Parallel Little dynamics are required to ignite the cross-modal avalanche from a single cue while sequential Glauber sweeps resolve superpositions. The operational storage capacity obeys the Amit-Gutfreund-Sompolinsky finite-size correction alpha_c(N) = alpha_c(infinity) - c N^{-1/2} and extrapolates to an asymptotic limit of 0.5 under macroscopic-basin retrieval.
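
The Kanter-Sompolinsky pseudo-inverse prescription can be sketched for a single layer as follows; the tri-layer embedding into cross-layer local fields is the paper's contribution and is not reproduced here:

```python
import numpy as np

def pseudo_inverse_couplings(xi):
    """Kanter-Sompolinsky pseudo-inverse couplings for P patterns xi of
    shape (P, N): J = xi^T C^{-1} xi / N with overlap matrix C = xi xi^T / N,
    diagonal zeroed. Single-layer sketch only."""
    P, N = xi.shape
    C = xi @ xi.T / N                       # (P, P) pattern overlap matrix
    J = xi.T @ np.linalg.inv(C) @ xi / N    # projects onto the pattern subspace
    np.fill_diagonal(J, 0.0)
    return J
```

By construction every stored pattern is an exact fixed point of the sign dynamics (up to the small self-coupling removed from the diagonal), which is what allows capacity to exceed the Hebbian limit.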

What carries the argument

The multilayer Ising framework that couples a PCA-whitening plus SimHash encoder to pseudo-inverse memory couplings, rendering the equal-weight mixture thermodynamically unstable.

If this is right

  • Storage capacity scales as alpha_c(N) = alpha_c(infinity) - c N^{-1/2} with alpha_c(infinity) approximately 0.5.
  • Parallel updates ignite cross-modal retrieval from a single cued layer.
  • Sequential updates resolve symmetric superpositions into a single state.
  • The architecture reconstructs multi-channel sleep states from a single noisy frontal EEG cue.
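
The parallel-versus-sequential duality can be illustrated with a zero-temperature caricature of the two update rules (the paper works at finite β; this sketch drops thermal noise):

```python
import numpy as np

def little_step(J, s):
    """Parallel (Little) update: every spin moves at once from the
    current local fields, so a cue can propagate before being overwritten."""
    return np.where(J @ s >= 0, 1, -1)

def glauber_sweep(J, s, rng):
    """Sequential (Glauber) sweep, zero-temperature limit: spins update
    one at a time in random order, each seeing the latest configuration."""
    s = s.copy()
    for i in rng.permutation(len(s)):
        s[i] = 1 if J[i] @ s >= 0 else -1
    return s
```

Either rule retrieves a stored pattern from a noisy single-layer cue; the paper's claim is about which rule works in which cross-modal regime, which this single-layer toy does not capture.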

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same encoder-coupling combination could be tested on other continuous data sets such as audio or image features to check whether the 0.5 capacity limit generalizes.
  • Hardware designs for associative memory might exploit the parallel-versus-sequential duality to optimize for either rapid ignition or ambiguity resolution.
  • If biological neural systems use analogous symmetry-breaking mechanisms, the observed capacity scaling might appear in large-scale brain recordings.

Load-bearing premise

The PCA whitening plus SimHash projection preserves enough of the original signal geometry for the pseudo-inverse couplings to create a thermodynamically unstable mixture whose selection by thermal fluctuations produces reliable cross-modal retrieval.
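
The geometric fact behind this premise is Charikar's SimHash guarantee: a random hyperplane separates two vectors with probability θ/π, so the Hamming distance between sign signatures estimates the angle. A quick numerical check (our illustration, not the paper's code):

```python
import numpy as np

def simhash_angle(x, y, n_bits=4096, seed=0):
    """Estimate the angle between x and y from the disagreement rate
    of their SimHash sign signatures: E[mismatch fraction] = theta / pi."""
    rng = np.random.default_rng(seed)
    H = rng.standard_normal((len(x), n_bits))
    bx = np.where(x @ H >= 0, 1, -1)
    by = np.where(y @ H >= 0, 1, -1)
    return np.pi * np.mean(bx != by)   # estimated angle in radians
```

Orthogonal inputs should give roughly π/2, parallel inputs exactly 0; deviations shrink as 1/sqrt(n_bits).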

What would settle it

Measuring the retrieval success rate versus network size N on the PhysioNet Sleep-EDF dataset and checking whether the effective capacity follows the predicted N^{-1/2} finite-size dependence, or observing that retrieval collapses when the encoder is replaced by a random projection that does not preserve geometry.
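
The N^{-1/2} check amounts to a linear fit in 1/sqrt(N); a hedged sketch on synthetic inputs, not the paper's data:

```python
import numpy as np

def extrapolate_capacity(Ns, alphas):
    """Least-squares fit of the AGS form alpha_c(N) = alpha_inf - c / sqrt(N),
    linear in x = N**-0.5. Returns (alpha_inf, c)."""
    x = np.asarray(Ns, dtype=float) ** -0.5
    A = np.column_stack([np.ones_like(x), -x])
    (alpha_inf, c), *_ = np.linalg.lstsq(A, np.asarray(alphas, float), rcond=None)
    return alpha_inf, c
```

A linear alpha_c-versus-N^{-1/2} plot makes the test visual: the points should fall on a line whose intercept is the asymptotic capacity.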

Figures

Figures reproduced from arXiv: 2605.14059 by Andrea Ladiana.

Figure 1
Figure 1. (a) Intra-class overlap q_intra grows monotonically with dataset quality r and tracks the empirical SimHash prediction across system sizes N = 8 through 512. (b) Separation gap Δq versus dataset quality r. (c) Histogram over angle (degrees). view at source ↗
Figure 2
Figure 2. Magnetisation dynamics for pattern µ=0 (helix) averaged over 50 independent runs, starting from a clean cue. The cued layer σ maintains its initial alignment (m ≈ 1), while τ and ϕ converge from random overlap (m ≈ 0) to the correct pattern within 5–10 MC steps (top rows). At low β (bottom row), convergence is dominated by thermal fluctuations, illustrating the breakdown of retrieval as one ap… view at source ↗
Figure 3
Figure 3. (a) extends the analysis to K ∈ {3, 5, 8, 12, 16, 20, 24, 28, 32} on a parametric curve family. Continuous sharpening drives the BCI to near zero across the entire range, confirming that the pseudo-inverse prescription scales gracefully; because J+ is recomputed from each pattern set, the fixed-point residual is near-invariant by construction. view at source ↗
Figure 4
Figure 4. (image-only figure; caption not extracted) view at source ↗
Figure 7
Figure 7. Mean winner overlap and margin averaged over 30 independent seeds and all three archetypes, revealing four distinct regimes: a paramagnetic phase at β ≲ 0.4, where thermal noise overwhelms the local field; a finite-size retrieval crossover at βc ≈ 0.5, where the overlap jumps sharply from ∼ 0 to ∼ 0.7; a retrieval plateau spanning nearly an order of magnitude (1 ≲ β ≲ 7), with near-perfect ov… view at source ↗
Figure 8
Figure 8. (image-only figure; caption not extracted) view at source ↗
Figure 9
Figure 9. (image-only figure; caption not extracted) view at source ↗
Figure 11
Figure 11. (image-only figure; caption not extracted) view at source ↗
Figure 10
Figure 10. (image-only figure; caption not extracted) view at source ↗
Figure 12
Figure 12. (image-only figure; caption not extracted) view at source ↗
Figure 13
Figure 13. (image-only figure; caption not extracted) view at source ↗
Figure 14
Figure 14. (image-only figure; caption not extracted) view at source ↗
Original abstract

Real-world physical signals are continuous and high-dimensional, yet the statistical-mechanics machinery of associative memory operates on discrete Ising spins. We bridge this divide through a multilayer Ising framework that couples a geometry-preserving continuous-to-Ising encoder (PCA whitening composed with SimHash random-hyperplane projection) to Kanter-Sompolinsky pseudo-inverse memory couplings, embedded directly into the local-field equations of a tri-layer hetero-associative system. The pseudo-inverse correction renders the equal-weight mixture state thermodynamically unstable, so that thermal fluctuations break the cross-modal symmetry and select a single global winner. We further establish a dynamical duality: parallel (Little) updates are structurally required to ignite the cross-modal signal avalanche from a single cued layer, whereas sequential (Glauber) sweeps resolve symmetric superpositions. The operational storage capacity obeys the Amit-Gutfreund-Sompolinsky finite-size correction $\alpha_c(N)=\alpha_c(\infty)-c\,N^{-1/2}$, extrapolating to an asymptotic operational limit $\alpha_c(\infty)\approx 0.50$ under macroscopic-basin retrieval. Applied to multi-channel sleep polysomnography (PhysioNet Sleep-EDF), the architecture reconstructs the macroscopic sleep state on parietal EEG and EOG axes from a single noisy frontal-EEG cue, demonstrating cross-modal recall in the presence of quenched biological disorder.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper constructs a tri-layer hetero-associative Ising model that encodes continuous signals via PCA whitening followed by SimHash random-hyperplane projection, then applies Kanter-Sompolinsky pseudo-inverse couplings to the local fields. It argues that these couplings render the equal-weight mixture state thermodynamically unstable so thermal fluctuations select a single macroscopic basin, establishes a dynamical duality between parallel (Little) and sequential (Glauber) dynamics, reports that the operational capacity obeys the Amit-Gutfreund-Sompolinsky finite-size form alpha_c(N) = alpha_c(infinity) - c N^{-1/2} with extrapolated alpha_c(infinity) approx 0.50, and demonstrates reconstruction of parietal EEG/EOG sleep states from a single noisy frontal-EEG cue on PhysioNet Sleep-EDF data.

Significance. If the encoder geometry and mixture instability hold, the work would supply a concrete statistical-mechanics route from continuous high-dimensional signals to discrete associative retrieval, together with a finite-size scaling law and a real-data cross-modal demonstration. The claimed capacity limit of 0.5 and the sleep-state reconstruction would be of interest to both theoretical spin-glass and applied neural-data communities, provided the central stability assumption is placed on firmer footing.

major comments (3)
  1. [Model construction and instability argument] The central claim that the Kanter-Sompolinsky pseudo-inverse renders the equal-weight mixture thermodynamically unstable (so that thermal fluctuations produce reliable single-basin retrieval) is asserted in the abstract and model construction but is not accompanied by an explicit eigenvalue calculation or stability analysis for the SimHash-encoded patterns; without this derivation the subsequent finite-size extrapolation and sleep-data application rest on an unverified operating regime.
  2. [Finite-size scaling section] The reported asymptotic capacity alpha_c(infinity) approx 0.50 is obtained by fitting the Amit-Gutfreund-Sompolinsky form alpha_c(N) = alpha_c(infinity) - c N^{-1/2} to finite-N simulations; the manuscript does not derive the prefactor c independently, report fit uncertainties, or demonstrate that the encoded patterns satisfy the conditions under which the AGS correction applies, rendering the numerical headline result circular with respect to the literature scaling ansatz.
  3. [Application to PhysioNet Sleep-EDF data] The sleep-polysomnography demonstration asserts reconstruction of the macroscopic sleep state from a single noisy frontal-EEG cue, yet supplies neither quantitative performance metrics with error bars, explicit data-exclusion criteria, nor a control showing that the claimed avalanche occurs only when the pseudo-inverse instability is present; these omissions prevent assessment of whether the architecture functions under quenched biological disorder.
minor comments (2)
  1. [Introduction] Notation for the tri-layer architecture and the precise definition of the macroscopic basin should be introduced with a single diagram or table early in the manuscript to improve readability.
  2. [Abstract] The abstract states the scaling form but does not define the constant c or the range of N over which the fit is performed; these details belong in the main text or a supplementary table.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the careful reading of our manuscript and the constructive feedback. We address each of the major comments below, indicating where revisions will be made to strengthen the paper.

Point-by-point responses
  1. Referee: [Model construction and instability argument] The central claim that the Kanter-Sompolinsky pseudo-inverse renders the equal-weight mixture thermodynamically unstable (so that thermal fluctuations produce reliable single-basin retrieval) is asserted in the abstract and model construction but is not accompanied by an explicit eigenvalue calculation or stability analysis for the SimHash-encoded patterns; without this derivation the subsequent finite-size extrapolation and sleep-data application rest on an unverified operating regime.

    Authors: We agree that an explicit stability analysis is necessary to substantiate the claim. In the revised version, we will include a detailed eigenvalue calculation for the stability matrix of the equal-weight mixture state, specifically for patterns encoded via PCA-SimHash. This analysis will demonstrate the negative eigenvalue indicating instability under the pseudo-inverse couplings, thereby confirming the operating regime for the finite-size scaling and data application. revision: yes

  2. Referee: [Finite-size scaling section] The reported asymptotic capacity alpha_c(infinity) approx 0.50 is obtained by fitting the Amit-Gutfreund-Sompolinsky form alpha_c(N) = alpha_c(infinity) - c N^{-1/2} to finite-N simulations; the manuscript does not derive the prefactor c independently, report fit uncertainties, or demonstrate that the encoded patterns satisfy the conditions under which the AGS correction applies, rendering the numerical headline result circular with respect to the literature scaling ansatz.

    Authors: The AGS finite-size correction is a standard result in the literature for associative memory models, and our simulations are consistent with it. However, we acknowledge the need for greater rigor. In the revision, we will report the fit with uncertainties obtained via bootstrap resampling, verify that the pattern overlaps satisfy the conditions for the AGS ansatz (e.g., Gaussian statistics post-encoding), and discuss the origin of the prefactor c within our model. While c is determined from the fit rather than derived a priori, this does not render the result circular as the scaling form is theoretically motivated and the extrapolation provides the asymptotic value. revision: partial
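
The proposed bootstrap could look like the following sketch (hypothetical procedure on synthetic numbers, not the authors' actual analysis or data):

```python
import numpy as np

def bootstrap_alpha_inf(Ns, alphas, n_boot=2000, seed=0):
    """Resample the (N, alpha_c) points with replacement, refit
    alpha_c(N) = a - c / sqrt(N) each time, and report the mean and
    spread of the extrapolated intercept a = alpha_c(inf)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(Ns, float) ** -0.5
    y = np.asarray(alphas, float)
    estimates = []
    while len(estimates) < n_boot:
        idx = rng.integers(0, len(x), len(x))
        if len(np.unique(x[idx])) < 2:   # degenerate resample: skip
            continue
        A = np.column_stack([np.ones(len(idx)), -x[idx]])
        sol, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
        estimates.append(sol[0])
    est = np.asarray(estimates)
    return est.mean(), est.std()
```

The spread of the intercept across resamples is the fit uncertainty the referee asks for; it grows quickly if the smallest N values dominate the fit.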

  3. Referee: [Application to PhysioNet Sleep-EDF data] The sleep-polysomnography demonstration asserts reconstruction of the macroscopic sleep state from a single noisy frontal-EEG cue, yet supplies neither quantitative performance metrics with error bars, explicit data-exclusion criteria, nor a control showing that the claimed avalanche occurs only when the pseudo-inverse instability is present; these omissions prevent assessment of whether the architecture functions under quenched biological disorder.

    Authors: We will enhance the application section by providing quantitative metrics such as reconstruction accuracy and state classification performance with standard error bars across multiple recordings. Explicit criteria for data inclusion/exclusion from the PhysioNet Sleep-EDF dataset will be detailed. Additionally, we will include a control analysis comparing the cross-modal avalanche with standard Hebbian couplings versus the pseudo-inverse, demonstrating that the instability is crucial for reliable single-basin retrieval in the presence of biological noise. revision: yes

Circularity Check

1 steps flagged

Asymptotic capacity limit obtained by fitting AGS scaling form to finite-N simulations

specific steps
  1. fitted input called prediction [Abstract]
    "The operational storage capacity obeys the Amit-Gutfreund-Sompolinsky finite-size correction α_c(N)=α_c(∞)-c N^{-1/2}, extrapolating to an asymptotic operational limit α_c(∞)≈0.50 under macroscopic-basin retrieval."

    The asymptotic value is determined by fitting the AGS scaling form (taken from prior literature) to finite-N simulation data for α_c(N); the reported limit and the constant c are therefore direct outputs of the fit to the assumed functional form rather than independently derived predictions.

full rationale

The paper's central numerical result for the infinite-N operational capacity is obtained by fitting the known Amit-Gutfreund-Sompolinsky finite-size scaling form to measured finite-N values from simulations and extrapolating; the reported α_c(∞)≈0.50 is therefore a fitted parameter rather than an independent first-principles derivation. The encoder and pseudo-inverse constructions are presented as enabling the instability of the mixture state, but no explicit reduction to self-definition or unverified self-citation is shown in the provided text. This produces moderate circularity confined to the extrapolation step while the remainder of the architecture is self-contained numerical demonstration.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 1 invented entities

The central claims rest on standard thermodynamic stability arguments for Ising systems with pseudo-inverse couplings and on the assumption that the chosen encoder preserves the geometry needed for macroscopic basins; the numerical capacity limit is obtained by extrapolation rather than closed-form derivation.

free parameters (2)
  • finite-size correction constant c
    Determined by fitting or extrapolation from finite-N simulations to obtain the reported scaling form.
  • asymptotic capacity alpha_c(infinity)
    Extrapolated numerical value approximately 0.50 obtained from the scaling fit rather than derived from first principles.
axioms (2)
  • domain assumption The pseudo-inverse correction renders the equal-weight mixture state thermodynamically unstable so that thermal fluctuations select a single global winner.
    Invoked to explain symmetry breaking in the tri-layer system.
  • domain assumption Parallel Little updates are required to ignite the cross-modal avalanche while sequential Glauber sweeps resolve symmetric superpositions.
    Stated as a dynamical duality without derivation in the abstract.
invented entities (1)
  • tri-layer hetero-associative Ising system with continuous-to-Ising encoder (no independent evidence)
    purpose: To couple geometry-preserving continuous signals to discrete spin memory via PCA whitening and SimHash projection.
    New architectural construct introduced to bridge continuous signals and Ising associative memory.

pith-pipeline@v0.9.0 · 5540 in / 1824 out tokens · 58207 ms · 2026-05-15T05:36:07.457285+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

40 extracted references · 40 canonical work pages

  1. [1] Pattern reconstruction (cross-layer completion). A protocol probing the existence and reachability of the attractor basins. Layer σ is initialised with a (possibly noisy) copy of an archetype µ; layers τ and ϕ are initialised to random spin noise. The system must reconstruct the missing macroscopic state across the uninitialised layers. Because the ini…
  2. [2] Hard mixture disentanglement. A protocol probing spontaneous symmetry breaking. All layers are simultaneously initialised in a symmetric superposition of two archetypes, s(a)_{t=0} = sign(w1 ξ^{µ1,(a)} + w2 ξ^{µ2,(a)}). The dynamics must spontaneously break the symmetry and select a single, globally consistent winner. This is structurally demanding: the mixture…
  3. [3] Easy mixture disentanglement. A baseline protocol that isolates basin stability from symmetry breaking. All layers receive a partial, highly corrupted cue drawn from the same archetype µ. Because no cross-pattern competition is present, each layer independently recognises its own cue. This control verifies whether retrieval failure stems from an intrin…
  4. [4] Layer σ reads the local field generated by τ and ϕ, which are currently noise, so the field is a random vector. The pristine cue is instantly overwritten by noise. The memory is erased.
  5. [5] Layer τ reads the now-destroyed σ and the noise from ϕ, and remains noise.
  6. [6] Layer ϕ reads two destroyed layers and remains noise. Cross-modal completion fails because the cue is washed out before it can propagate. Under parallel Little dynamics, by contrast, all layers compute their next state simultaneously from the current state. At t = 1, σ reads noise and is temporarily degraded, but at the same instant τ and ϕ read the prist…
  7. [7] J. J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences 79, 2554 (1982).
  8. [8] W. A. Little, The existence of persistent states in the brain, Mathematical Biosciences 19, 101 (1974).
  9. [9] D. J. Amit, H. Gutfreund, and H. Sompolinsky, Storing infinite numbers of patterns in a spin-glass model of neural networks, Physical Review Letters 55, 1530 (1985).
  10. [10] D. J. Amit, H. Gutfreund, and H. Sompolinsky, Statistical mechanics of neural networks near saturation, Annals of Physics 173, 30 (1987).
  11. [11] L. Personnaz, I. Guyon, and G. Dreyfus, Collective computational properties of neural networks: New learning mechanisms, Physical Review A 34, 4217 (1986).
  12. [12] I. Kanter and H. Sompolinsky, Associative recall of memory without errors, Physical Review A 35, 380 (1987).
  13. [13] A. Fachechi, E. Agliari, and A. Barra, Dreaming neural networks: forgetting spurious memories and reinforcing pure ones, Neural Networks 112, 24 (2019).
  14. [14] E. Agliari, F. Alemanno, A. Barra, and A. Fachechi, Dreaming neural networks: rigorous results, Journal of Statistical Mechanics: Theory and Experiment 2019, 083503 (2019).
  15. [15] E. Agliari, F. Alemanno, M. Aquaro, and A. Fachechi, Regularization, early-stopping and dreaming: a Hopfield-like setup to address generalization and overfitting, Neural Networks 177, 106389 (2024).
  16. [16] E. Agliari, F. Alemanno, M. Aquaro, A. Barra, F. Durante, and I. Kanter, Hebbian dreaming for small datasets, Neural Networks 173, 106174 (2024).
  17. [17] D. Krotov and J. J. Hopfield, Dense associative memory for pattern recognition, Advances in Neural Information Processing Systems 29 (2016).
  18. [18] M. Demircigil, J. Huck, M. Löwe, S. Upgang, and F. Vermet, On a model of associative memory with huge storage capacity, Journal of Statistical Physics 168, 288 (2017).
  19. [19] H. Ramsauer, B. Schäfl, J. Lehner, P. Seidl, M. Widrich, T. Adler, L. Gruber, M. Holzleitner, M. Pavlović, G. K. Sandve, et al., Hopfield networks is all you need, in International Conference on Learning Representations (2021).
  20. [20] Y. Bahri, J. Kadmon, J. Pennington, S. S. Schoenholz, J. Sohl-Dickstein, and S. Ganguli, Statistical mechanics of deep learning, Annual Review of Condensed Matter Physics 11, 501 (2020).
  21. [21] L. Zdeborová and F. Krząkała, Statistical physics of inference: Thresholds and algorithms, Advances in Physics 65, 453 (2016).
  22. [22] P. Kanerva, Sparse Distributed Memory (MIT Press, 1988).
  23. [23] B. Kosko, Bidirectional associative memories, IEEE Transactions on Systems, Man, and Cybernetics 18, 49 (1988).
  24. [24] A. Barra, P. Contucci, E. Mingione, and D. Tantari, Multi-species mean-field spin-glasses: rigorous results, Annales Henri Poincaré 16, 691 (2015).
  25. [25] A. Barra, G. Catania, A. Decelle, and B. Seoane, Thermodynamics of bidirectional associative memories, Journal of Physics A: Mathematical and Theoretical 56, 205005 (2023).
  26. [26] E. Agliari, A. Alessandrelli, A. Barra, M. S. Centonze, and F. Ricci-Tersenghi, Generalized hetero-associative neural networks, Journal of Statistical Mechanics: Theory and Experiment 2025, 013302 (2025).
  27. [27] A. Alessandrelli, A. Barra, A. Ladiana, A. Lepre, and F. Ricci-Tersenghi, Supervised and unsupervised protocols for hetero-associative neural networks, Physica A: Statistical Mechanics and its Applications, 130871 (2025).
  28. [28] J. Lin, E. Keogh, S. Lonardi, and B. Chiu, A symbolic representation of time series, with implications for streaming algorithms, Proceedings of the 8th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (DMKD), 2 (2003).
  29. [29] B. Millidge, T. Salvatori, Y. Song, R. Bogacz, and T. Lukasiewicz, Universal Hopfield networks: A general framework for single-shot associative memory models, in Proceedings of the 39th International Conference on Machine Learning (ICML) (PMLR, 2022), pp. 15561–15583.
  30. [30] H. H. Khan and H. M. Ali, Detection of duplicate and near-duplicate content for web crawlers, Journal of Independent Studies and Research – Computing 13, 30 (2015).
  31. [31] M. S. Charikar, Similarity estimation techniques from rounding algorithms, Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing, 380 (2002).
  32. [32] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Optimization by simulated annealing, Science 220, 671 (1983).
  33. [33] M. X. Goemans and D. P. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, Journal of the ACM 42, 1115 (1995).
  34. [34] P. Indyk and R. Motwani, Approximate nearest neighbors: Towards removing the curse of dimensionality, in Proceedings of the 30th Annual ACM Symposium on Theory of Computing (STOC) (ACM, 1998), pp. 604–613.
  35. [35] R. Salakhutdinov and G. E. Hinton, Deep Boltzmann machines, in Proceedings of the 12th International Conference on Artificial Intelligence and Statistics (AISTATS) (PMLR, 2009), pp. 448–455.
  36. [36] F. Alemanno, M. Aquaro, I. Kanter, A. Barra, and E. Agliari, Supervised Hebbian learning, Europhysics Letters 141, 11001 (2023).
  37. [37] J. A. Hertz, A. S. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computation (Addison-Wesley, Redwood City, CA, 1991).
  38. [38] B. Kemp, A. Zwinderman, B. Tuk, H. Kamphuisen, and J. Oberye, Analysis of a sleep-dependent neuronal feedback loop: the slow-wave microcontinuity of the EEG, IEEE Transactions on Biomedical Engineering 47, 1185 (2000).
  39. [39] Following the legacy R&K scoring of the PhysioNet dataset, we group stages 3 and 4 into a single Deep Sleep (SWS) macro-state, corresponding to the modern AASM N3 stage.
  40. [40] W. B. Johnson and J. Lindenstrauss, Extensions of Lipschitz mappings into a Hilbert space, Contemporary Mathematics 26, 189 (1984).