PCA Initialization for Approximate Message Passing in Rotationally Invariant Models
Authors: Marco Mondelli, Ramji Venkataramanan
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our numerical simulations show an excellent agreement between AMP results and theoretical predictions, and suggest an interesting open direction on achieving Bayes-optimal performance. The agreement between the practical performance of AMP and the theoretical predictions of state evolution is demonstrated via numerical results for different spectral distributions of W. |
| Researcher Affiliation | Academia | Marco Mondelli, IST Austria, marco.mondelli@ist.ac.at; Ramji Venkataramanan, University of Cambridge, rv285@cam.ac.uk |
| Pseudocode | No | The paper describes the algorithms using mathematical equations and prose (e.g., equations (3.1)-(3.3) for the symmetric square matrices AMP algorithm), but it does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not contain any statements about releasing source code for the methodology, nor does it provide links to a code repository or mention supplementary materials for code access. |
| Open Datasets | No | The paper describes generating synthetic data based on specified priors and noise distributions (e.g., 'The signal u has a Rademacher prior...', 'W is rotationally invariant'), but it does not mention the use of any publicly available or open datasets, nor does it provide access information for any data. |
| Dataset Splits | No | The paper conducts numerical simulations with synthetically generated data to validate theoretical predictions, but it does not describe specific training, validation, or test dataset splits as typically found in machine learning experiments. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to conduct the numerical simulations. |
| Software Dependencies | No | The paper describes the mathematical models and simulation settings, but it does not specify any software dependencies with version numbers (e.g., Python, libraries, or solvers with their specific versions) used for the implementation or experiments. |
| Experiment Setup | Yes | In the simulations, α is estimated from the largest eigenvalue/singular value of X. In (a), we set n = 8000 and c = 2; in (b), we set n = 4000; and in (c)-(d), we set n = 8000 and γ = 1/2. The signal u has a Rademacher prior, i.e., its entries are i.i.d. and uniform in {-1, 1}. In the rectangular case, the signal v has a Gaussian prior, i.e., it is uniformly distributed on the sphere of radius √n. Given these priors, u_t is chosen to be the single-iterate posterior mean denoiser u_t(x) = tanh(μ_t x / σ_{t,t}), where μ_t and σ_{t,t} are the state evolution parameters; these are replaced by consistent estimates in the simulations. For the rectangular case, we choose v_t(x) = x. Each experiment is repeated for n_trials = 100 independent runs. |
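To make the Experiment Setup row concrete, here is a minimal Python sketch of the symmetric-case simulation it describes: a Rademacher signal u, a rotationally invariant noise matrix W = O^T Λ O with Haar-distributed O, a PCA initialization taken from the principal eigenvector of X (with α estimated from the largest eigenvalue), and the posterior mean denoiser u_t(x) = tanh(μ_t x / σ_{t,t}). The observation model X = (α/n) u u^T + W, the spectral law used for Λ, and all function and variable names are illustrative assumptions, not taken from the authors' code (which, per the table, is not released).

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Sample an orthogonal matrix from the Haar measure via QR of a Gaussian matrix."""
    A = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(A)
    # Fix column signs so the distribution is exactly Haar.
    return Q * np.sign(np.diag(R))

def generate_instance(n, alpha, eigs, rng):
    """Rank-one spiked model X = (alpha / n) u u^T + W with rotationally invariant W.

    `eigs` is a sample from the desired spectral distribution of W; both the
    spectral law and the normalization of the observation model are assumptions
    for illustration, not quotes from the paper.
    """
    u = rng.choice([-1.0, 1.0], size=n)          # Rademacher prior
    O = haar_orthogonal(n, rng)
    W = O.T @ np.diag(eigs) @ O                   # rotationally invariant noise
    X = (alpha / n) * np.outer(u, u) + W
    return u, X

def pca_initialization(X):
    """PCA initialization: principal eigenvector of X, plus the top eigenvalue
    (the quantity from which alpha is estimated in the simulations)."""
    eigvals, eigvecs = np.linalg.eigh(X)          # ascending eigenvalues
    lam_max = eigvals[-1]
    u0 = eigvecs[:, -1] * np.sqrt(X.shape[0])     # normalize so ||u0||^2 = n
    return u0, lam_max

def posterior_mean_denoiser(x, mu_t, sigma_tt):
    """Single-iterate posterior mean denoiser for a Rademacher prior:
    u_t(x) = tanh(mu_t * x / sigma_tt). In the simulations mu_t and sigma_tt
    are replaced by consistent estimates."""
    return np.tanh(mu_t * x / sigma_tt)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, alpha = 2000, 1.5                          # smaller than the paper's n = 8000, to keep the demo quick; alpha is illustrative
    eigs = rng.uniform(-np.sqrt(3), np.sqrt(3), size=n)   # assumed spectral law for W
    u, X = generate_instance(n, alpha, eigs, rng)
    u0, lam_max = pca_initialization(X)
    overlap = abs(u0 @ u) / (np.linalg.norm(u0) * np.linalg.norm(u))
    print(f"top eigenvalue = {lam_max:.3f}, normalized overlap |<u0, u>| = {overlap:.3f}")
```

Repeating this over independent draws (the paper uses n_trials = 100 runs) and averaging the overlap is what the reported simulation curves would correspond to.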
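Since the paper presents its algorithm only through equations (3.1)-(3.3) and prose (see the Pseudocode row), the sketch below shows the generic structure of an AMP iteration started from the PCA initialization: multiply by X, subtract an Onsager correction, apply the denoiser. For simplicity it uses the classical single-memory Onsager term, which is appropriate for Gaussian (semicircle) noise; the paper's recursion for general rotationally invariant W instead uses memory terms involving all previous iterates and is not reproduced here. All names, the simplified first step, and the way the state evolution parameters are passed in are assumptions for illustration.

```python
import numpy as np

def posterior_mean_denoiser_deriv(x, mu_t, sigma_tt):
    """Derivative of u_t(x) = tanh(mu_t * x / sigma_tt), needed for the Onsager term."""
    return (mu_t / sigma_tt) * (1.0 - np.tanh(mu_t * x / sigma_tt) ** 2)

def amp_gaussian_noise(X, u0, denoiser, denoiser_deriv, mu, sigma, num_iter=10):
    """Classical single-memory AMP for the symmetric spiked model, started from
    the PCA initialization u0.

    NOTE: this is NOT the paper's recursion (3.1)-(3.3); for general rotationally
    invariant noise the Onsager correction involves all previous iterates. mu[t]
    and sigma[t] stand in for the state evolution parameters, which the paper
    replaces by consistent estimates computed from the data.
    """
    u_cur = u0
    f = X @ u_cur                                            # simplified first step (no memory term yet)
    for t in range(num_iter):
        u_new = denoiser(f, mu[t], sigma[t])                 # u^{t+1} = u_{t+1}(f^t)
        b = np.mean(denoiser_deriv(f, mu[t], sigma[t]))      # Onsager coefficient
        f = X @ u_new - b * u_cur                            # f^{t+1} = X u^{t+1} - b u^t
        u_cur = u_new
    return u_cur
```

It can be driven with the data, PCA initialization, and posterior_mean_denoiser from the previous sketch, e.g. `u_hat = amp_gaussian_noise(X, u0, posterior_mean_denoiser, posterior_mean_denoiser_deriv, mu, sigma)` once arrays `mu` and `sigma` of per-iteration parameter estimates are available.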