Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
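To make the validation step concrete, a minimal sketch of how LLM-assigned labels could be checked against manually labeled ground truth is shown below. The function name, label values, and data are illustrative assumptions, not the actual pipeline from [1].

```python
# Hypothetical sketch of validating LLM-based classifications against a
# manually labeled dataset, as described in the notice above.
# All names and data are illustrative assumptions, not the actual pipeline.

def accuracy(llm_labels, manual_labels):
    """Fraction of items where the LLM label matches the manual label."""
    assert len(llm_labels) == len(manual_labels)
    matches = sum(a == b for a, b in zip(llm_labels, manual_labels))
    return matches / len(llm_labels)

# Example: validating one reproducibility variable on five papers.
llm = ["No", "Yes", "No", "No", "Yes"]
manual = ["No", "Yes", "Yes", "No", "Yes"]
print(accuracy(llm, manual))  # 4 of 5 labels agree -> 0.8
```

In practice such a check would be run per reproducibility variable, since agreement rates can differ between easy variables (e.g., venue) and subjective ones (e.g., experiment setup).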

Rigorous Dynamics and Consistent Estimation in Arbitrarily Conditioned Linear Systems

Authors: Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 5, Numerical Simulations: "The paper [8] presented several numerical experiments to assess the performance of EM-VAMP relative to other methods. Here, our goal is to confirm that EM-VAMP's performance matches the SE predictions."
Researcher Affiliation | Academia | Alyson K. Fletcher, Dept. of Statistics, UC Los Angeles, EMAIL; Mojtaba Sahraee-Ardakan, Dept. of EE, UC Los Angeles, EMAIL; Sundeep Rangan, Dept. of ECE, NYU, EMAIL; Philip Schniter, Dept. of ECE, The Ohio State Univ., EMAIL
Pseudocode | Yes | Algorithm 1: Adaptive VAMP
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | No | The numerical experiments use synthetic data and an image described as "an N = 256 × 256 image of a satellite". The paper does not provide concrete access information (link, DOI, repository, or a proper citation with author/year) for a publicly available dataset used for training.
Dataset Splits | No | The paper mentions evaluating performance but does not specify exact training, validation, and test splits or a cross-validation method.
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | No | The paper describes parameters of the simulated data (e.g., matrix dimensions, condition number, sparsity level) but does not provide algorithmic hyperparameters or system-level training settings such as learning rates, batch sizes, or optimizer configurations.