Multi-layer State Evolution Under Random Convolutional Design

Authors: Max Daniels, Cédric Gerbelot, Florent Krzakala, Lenka Zdeborová

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our theory numerically and observe close agreement between convolutional AMP iterations and its state evolution predictions, as shown in Figure 1 and in Section 5.
Researcher Affiliation | Academia | Max Daniels (Northeastern University, daniels.g@northeastern.edu); Cédric Gerbelot (ENS Paris, cedric.gerbelot@ens.fr); Florent Krzakala (IdePHICS Laboratory, EPFL, florent.krzakala@epfl.ch); Lenka Zdeborová (SPOC Laboratory, EPFL, lenka.zdeborova@epfl.ch)
Pseudocode | No | The paper describes algorithms but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code can be used as a general purpose library to build compositional models and evaluate AMP and its state evolution. We make this code available at https://github.com/mdnls/conv-ml-amp.git.
Open Datasets | Yes | As an example, we show in Figure 3 the sizes of convolutional layers used by the DC-GAN architecture to generate LSUN images [Radford et al., 2015, Figure 1].
Dataset Splits | No | The paper does not explicitly specify training/test/validation splits (e.g., percentages or sample counts) for reproducibility.
Hardware Specification | No | The paper does not explicitly describe the specific hardware used (e.g., GPU/CPU models, memory) to run its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | In both, the output channel l = 1 generates noisy, compressive linear measurements y = z^(1) + ε with ε_i ~ N(0, σ^2), and dense couplings W^(1)_ij ~ N(0, 1/n^(1)). Layers 2 ≤ l ≤ 4 use MCC couplings W^(l) ~ MCC(D_l, P_l, q, k), where q·P_l = n_l and D_l = β^(l)·P_l = q·n_{l-1}. Channel functions {φ^(l)} vary across the two experiments. The input prior is P_X(x) = N(x; 0, 1) and the problem parameters are q = 10 channels, filter size k = 3, noise level σ^2 = 10^-4, input dimension n^(L) = 5000, and layerwise aspect ratios β^(L) = 2 and β^(l) = 1 for 2 ≤ l < L. Finally, the channel aspect ratio β^(1) varies in each experiment.
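
For concreteness, below is a minimal Python sketch of how the quoted measurement setup could be instantiated: a dense output channel with couplings W^(1)_ij ~ N(0, 1/n^(1)) and noisy measurements y = W x + ε, plus a simplified stand-in for the MCC(D_l, P_l, q, k) couplings. The helper names (dense_coupling, mcc_coupling), the block-circulant layout with subsampled rows, the filter normalization 1/sqrt(kq), and the choice β^(1) = 0.5 are illustrative assumptions, not the authors' reference implementation (which is available at the repository linked above).

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem parameters quoted in the setup above
q, k = 10, 3          # number of channels and filter size
sigma2 = 1e-4         # noise level sigma^2
n_in = 5000           # input dimension n^(L)

# i.i.d. Gaussian input prior P_X = N(0, 1)
x = rng.standard_normal(n_in)

def dense_coupling(m, n, rng=rng):
    """Dense couplings W_ij ~ N(0, 1/n), as used for the output channel l = 1."""
    return rng.standard_normal((m, n)) / np.sqrt(n)

def mcc_coupling(P, q, k, beta=1.0, rng=rng):
    """Simplified multi-channel convolution coupling (block structure is an assumption).

    Each of the q x q channel pairs gets an independent length-k filter embedded in a
    P x P circulant block; rows are subsampled to D = beta * P within each block.
    """
    D = int(beta * P)
    keep = np.linspace(0, P - 1, D).astype(int)   # crude stand-in for strided rows
    row_blocks = []
    for _ in range(q):
        col_blocks = []
        for _ in range(q):
            first_row = np.zeros(P)
            first_row[:k] = rng.standard_normal(k) / np.sqrt(k * q)
            block = np.stack([np.roll(first_row, s) for s in range(P)])  # circulant block
            col_blocks.append(block[keep])
        row_blocks.append(np.hstack(col_blocks))
    return np.vstack(row_blocks)

# Output channel l = 1: noisy compressive measurements y = W x + eps, eps_i ~ N(0, sigma^2)
beta1 = 0.5                                   # channel aspect ratio beta^(1) (varied in the paper)
W1 = dense_coupling(int(beta1 * n_in), n_in)
y = W1 @ x + np.sqrt(sigma2) * rng.standard_normal(W1.shape[0])

# Small MCC example (illustrative size; the paper's experiments use P = n_l / q)
W_mcc = mcc_coupling(P=64, q=q, k=k)
z = W_mcc @ rng.standard_normal(W_mcc.shape[1])
```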