Rigorous Dynamics and Consistent Estimation in Arbitrarily Conditioned Linear Systems
Authors: Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 Numerical Simulations: The paper [8] presented several numerical experiments to assess the performance of EM-VAMP relative to other methods. Here, our goal is to confirm that EM-VAMP's performance matches the SE predictions. |
| Researcher Affiliation | Academia | Alyson K. Fletcher Dept. Statistics UC Los Angeles akfletcher@ucla.edu; Mojtaba Sahraee-Ardakan Dept. EE, UC Los Angeles msahraee@ucla.edu; Sundeep Rangan Dept. ECE, NYU srangan@nyu.edu; Philip Schniter Dept. ECE, The Ohio State Univ. schniter@ece.osu.edu |
| Pseudocode | Yes | Algorithm 1 Adaptive VAMP |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | No | The numerical experiments use synthetic data and an image described as 'An N = 256 × 256 image of a satellite'. The paper does not provide concrete access information (link, DOI, repository, or a proper citation with author and year) for any publicly available dataset used for training. |
| Dataset Splits | No | The paper mentions evaluating performance but does not specify exact training, validation, and test dataset splits or cross-validation methods. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | The paper describes parameters of the simulated data (e.g., matrix dimensions, condition number, sparsity level) but does not provide specific algorithmic hyperparameters or system-level training settings such as learning rates, batch sizes, or optimizer configurations. |