Matrix Inference and Estimation in Multi-Layer Models

Authors: Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From Section 5 (Numerical Experiments): 'We consider the problem of learning the input layer of a two-layer neural network as described in Section 2.3. ... The normalized test error for ADAM-MAP, ML-Mat-VAMP and the ML-Mat-VAMP SE are plotted in Fig. 2.'
Researcher Affiliation | Academia | Parthe Pandit, Dept. ECE, UC Los Angeles, parthepandit@ucla.edu; Mojtaba Sahraee-Ardakan, Dept. ECE, UC Los Angeles, msahraee@ucla.edu; Sundeep Rangan, Dept. ECE, NYU, srangan@nyu.edu; Philip Schniter, Dept. ECE, The Ohio State Univ., schniter.1@osu.edu; Alyson K. Fletcher, Dept. Statistics, UC Los Angeles, akfletcher@ucla.edu
Pseudocode | Yes | Algorithm 1: Multilayer Matrix VAMP (ML-Mat-VAMP)
Open Source Code | Yes | Code available at https://github.com/parthe/ML-Mat-VAMP.
Open Datasets | No | The paper describes synthetic data generation ('The weight vectors F1 and F2 are generated as i.i.d. Gaussians with zero mean and unit variance. The input X is also i.i.d. Gaussians with variance 1/Nin so that the average pre-activation has unit variance.') but does not provide access information for a publicly available dataset. (A sketch of this generation process appears after the table.)
Dataset Splits | No | The paper states 'We generate 1000 test samples and a variable number of training samples that ranges from 200 to 4000.' but does not explicitly provide details about a validation split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions the 'ADAM optimizer [21] in the Keras package of Tensorflow' but does not specify version numbers for Keras or TensorFlow.
Experiment Setup | Yes | Our experiments take d = 4 hidden units, Nin = 100 input units, Nout = 1 output unit, sigmoid activations, and a variable number of samples N. The ADAM algorithm is run for 100 epochs with a learning rate of 0.01. Algorithm 1 is run for 20 iterations. (A Keras sketch of this setup appears after the table.)
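
The Open Datasets, Dataset Splits, and Experiment Setup rows describe the synthetic data in enough detail to approximate it. Below is a minimal NumPy sketch of that generation process, assuming a noiseless observation model Y = sigmoid(X F1^T) F2^T (the paper's exact noise model is not quoted above); names such as sample_dataset are illustrative and not taken from the released code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions quoted in the Experiment Setup row: d = 4 hidden units,
# Nin = 100 input units, Nout = 1 output unit, sigmoid activations.
n_in, d, n_out = 100, 4, 1

# Weight matrices F1 and F2: i.i.d. Gaussian, zero mean, unit variance (quoted above).
F1 = rng.standard_normal((d, n_in))
F2 = rng.standard_normal((n_out, d))

def sample_dataset(n_samples):
    """Draw inputs X i.i.d. N(0, 1/Nin) so the average pre-activation has unit
    variance, then pass them through the two-layer network. The noiseless
    output Y = sigmoid(X F1^T) F2^T is an assumption made for illustration."""
    X = rng.normal(0.0, np.sqrt(1.0 / n_in), size=(n_samples, n_in))
    Z = 1.0 / (1.0 + np.exp(-X @ F1.T))   # hidden-layer activations, shape (n_samples, d)
    Y = Z @ F2.T                          # network output, shape (n_samples, n_out)
    return X, Y

# 1000 test samples and a training set whose size ranges from 200 to 4000, as quoted.
X_train, Y_train = sample_dataset(2000)
X_test, Y_test = sample_dataset(1000)
```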
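
The Software Dependencies and Experiment Setup rows pin down most of the ADAM baseline (Adam in Keras/TensorFlow, 100 epochs, learning rate 0.01, a two-layer sigmoid network). The sketch below reproduces that configuration under stated assumptions: the mean-squared-error loss, linear output layer, default batch size, and normalization of the test error are guesses, and the paper's ADAM-MAP baseline may include MAP-style regularization that is not shown here.

```python
import numpy as np
import tensorflow as tf

# Two-layer network matching the quoted setup: Nin = 100 inputs,
# d = 4 sigmoid hidden units, Nout = 1 output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100,)),
    tf.keras.layers.Dense(4, activation="sigmoid"),
    tf.keras.layers.Dense(1),
])

# Adam optimizer with learning rate 0.01, as quoted; the MSE loss is an assumption.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01), loss="mse")

# 100 epochs, as quoted. X_train, Y_train, X_test, Y_test come from the
# data-generation sketch above.
model.fit(X_train, Y_train, epochs=100, verbose=0)

test_mse = model.evaluate(X_test, Y_test, verbose=0)
# One plausible definition of the normalized test error reported in Fig. 2.
normalized_test_error = test_mse / np.var(Y_test)
```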