Signal Recovery with Non-Expansive Generative Network Priors

Authors: Jorio Cocola

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Appendix H we empirically verify the predictions of Theorem 5.4, demonstrating how (a practical variant of) Algorithm 1 recovers signals y* in the range of non-expansive generative networks from undersampled noisy measurements.
Researcher Affiliation | Academia | Jorio Cocola, Harvard University, jcocola@seas.harvard.edu
Pseudocode | Yes | Algorithm 1: SUBGRADIENT DESCENT [21] (a minimal illustrative sketch is given after the table).
Open Source Code | No | The paper answers 'Yes' to checklist question 3a ('Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?'), but the main body provides no direct link or explicit statement of code availability.
Open Datasets | No | The paper does not provide concrete access information (a specific link, DOI, repository name, formal citation with authors/year, or reference to an established benchmark dataset) for a publicly available or open dataset.
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | Yes | All the experiments were run locally on an Apple M1 CPU.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | No | The paper describes theoretical conditions for the algorithm's performance but does not provide specific experimental setup details, such as concrete hyperparameter values, training configurations, or system-level settings, in the main text.
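
For context on the pseudocode row above: the paper studies recovery of a signal in the range of a generative network by running subgradient descent on the unsquared measurement misfit ||A G(x) - y||_2, which is nonsmooth (hence subgradient rather than gradient descent). The following is a minimal sketch of that mechanism, not the paper's implementation; the one-layer ReLU generator, Gaussian measurement matrix, dimensions, noise level, and diminishing step size are all illustrative assumptions chosen for brevity.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical dimensions (not taken from the paper): latent size k,
    # signal size n, number of measurements m < n (undersampled regime).
    k, n, m = 10, 200, 60

    # Toy one-layer ReLU generator G(x) = relu(W x). The paper treats
    # general non-expansive networks; a single layer keeps the sketch
    # self-contained.
    W = rng.standard_normal((n, k)) / np.sqrt(n)

    def G(x):
        return np.maximum(W @ x, 0.0)

    # Gaussian measurement matrix and noisy undersampled measurements of a
    # signal y* = G(x_star) in the range of the generator.
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_star = rng.standard_normal(k)
    y = A @ G(x_star) + 0.01 * rng.standard_normal(m)

    # Subgradient descent on the nonsmooth objective f(x) = ||A G(x) - y||_2,
    # with a diminishing step size (a standard choice for subgradient methods).
    x = rng.standard_normal(k)
    for t in range(3000):
        r = A @ G(x) - y
        norm_r = np.linalg.norm(r)
        if norm_r < 1e-9:
            break
        active = (W @ x > 0).astype(float)         # ReLU activation pattern
        g = W.T @ (active * (A.T @ (r / norm_r)))  # subgradient via chain rule
        x = x - (0.5 / np.sqrt(t + 1)) * g

    rel_err = np.linalg.norm(G(x) - G(x_star)) / np.linalg.norm(G(x_star))
    print(f"relative recovery error: {rel_err:.3e}")

The unsquared loss is what makes this a subgradient (rather than gradient) method: f is non-differentiable wherever the residual vanishes or a ReLU unit sits at zero, and the update above simply picks one valid subgradient at such points.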