Inferring Change Points in High-Dimensional Linear Regression via Approximate Message Passing
Authors: Gabriel Arpino, Xiaoqi Liu, Ramji Venkataramanan
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our theory via numerical experiments, and demonstrate the favorable performance of our estimators on both synthetic data and images. In Section 4, we present experiments on both synthetic data and images, demonstrating the superior performance of AMP compared to other state-of-the-art algorithms for linear regression with change points. |
| Researcher Affiliation | Academia | 1University of Cambridge. Correspondence to: Gabriel Arpino <ga442@cam.ac.uk>, Xiaoqi Liu <xl394@cam.ac.uk>, Ramji Venkataramanan <rv285@cam.ac.uk>. |
| Pseudocode | No | The paper describes the AMP algorithm using equation (3) and accompanying text, but it does not present a structured pseudocode block or a clearly labeled algorithm (an illustrative generic-AMP sketch is given below the table). |
| Open Source Code | Yes | A Python implementation of our algorithm and code to run the experiments is available at (Arpino & Liu, 2024). (Arpino, G. and Liu, X. AMP implementation for change point inference in high-dimensional linear regression. https://github.com/gabrielarpino/AMP_chgpt_lin_reg, 2024.) |
| Open Datasets | No | The paper uses "synthetic data" and refers to "a (255, 255) sparse grayscale image used by Schniter & Rangan (2014)". While the image might be traceable via citation, no direct link or clear statement of public availability for the *specific data used* (e.g., rotated versions of the image) is provided, nor for the generated synthetic data. |
| Dataset Splits | Yes | AMP assumes no knowledge of the true sparsity level α = 0.5 and estimates the sparsity level using CV over a set of values not including the ground truth (details in Appendix E). DCDP, DPDU, and DP each have two hyperparameters: one corresponding to the ℓ1 penalty and the other penalizing the number of change points. We run cross-validation on these hyperparameters for 12, 12, or 42 pairs of values, respectively, using the suggested values from the original papers. |
| Hardware Specification | Yes | The experiment took one hour to complete on an Apple M1 Max chip, whereas competing algorithms did not return an output within 2.5 hours, due to the larger signal dimension compared to Figure 4 (p = 7225 vs p = 200). |
| Software Dependencies | Yes | We compute the Jacobians of these denoisers in (3) using Automatic Differentiation in Python JAX (Bradbury et al., 2018). (Bradbury, J., Frostig, R., Hawkins, P., Johnson, M. J., Leary, C., Maclaurin, D., Necula, G., Paszke, A., VanderPlas, J., Wanderman-Milne, S., and Zhang, Q. JAX: composable transformations of Python+NumPy programs, 2018. Version 0.3.13.) A sketch of this autodiff pattern appears below the table. |
| Experiment Setup | Yes | For each experiment, we run AMP for t = 15 iterations, and average over 10 to 20 independent trials. We initialize AMP with f^0(B^0) sampled row-wise independently from the prior P_B used to define the ensemble state evolution (17)–(20). Figures 4, 6, 7 and 8 use Bernoulli-Gaussian priors, as defined in Appendix C.2. Figure 6 uses α = 1/6 and σ_ℓ = 2.5 for ℓ ∈ [L]. Figures 4, 7 and 8 use α = 0.5 and σ_ℓ² = δ for ℓ ∈ [L]... AMP uses cross-validation (CV) over 5 values of α̂ ∈ {0.1, 0.3, 0.45, 0.6, 0.9}, which do not contain the true α = 0.5. DCDP, DPDU, and DP each have two hyperparameters: one corresponding to the ℓ1 penalty and the other penalizing the number of change points. We run cross-validation on these hyperparameters for 12, 12, or 42 pairs of values, respectively, using the suggested values from the original papers. A minimal sketch of this kind of CV selection appears below the table. |
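
Since the paper specifies AMP through its update equation (3) rather than pseudocode (see the Pseudocode row), the following is a minimal sketch of the *standard* AMP iteration for sparse linear regression with a soft-thresholding denoiser. It is not the authors' algorithm, whose iterates are matrix-valued and whose denoisers account for change points; the function names and the threshold tuning parameter `theta` are our illustration.

```python
import numpy as np

def soft_threshold(v, tau):
    """Soft-thresholding denoiser for a sparse signal."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp_sparse_regression(A, y, num_iters=15, theta=1.0):
    """Generic AMP for y = A x + noise with a sparse x.

    Illustrative only: the paper's AMP (its eq. (3)) handles linear
    regression with change points; this shows the standard structure,
    including the Onsager correction term.
    """
    n, p = A.shape
    delta = n / p            # sampling ratio
    x = np.zeros(p)
    z = y.copy()
    for _ in range(num_iters):
        # Effective observation: x + A^T z behaves like x plus Gaussian noise.
        s = x + A.T @ z
        tau = theta * np.sqrt(np.mean(z ** 2))   # empirical noise-level estimate
        x_new = soft_threshold(s, tau)
        # Onsager correction keeps the effective noise approximately Gaussian.
        onsager = np.mean(np.abs(x_new) > 0) / delta
        z = y - A @ x_new + onsager * z
        x = x_new
    return x
```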
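The Software Dependencies row notes that the denoiser Jacobians needed in (3) are computed with JAX automatic differentiation. Below is a small sketch of that pattern for a scalar Bernoulli-Gaussian posterior-mean denoiser (the paper uses Bernoulli-Gaussian priors, per its Appendix C.2); the function and its parameterization are our illustration, not the authors' code.

```python
import jax
import jax.numpy as jnp
from jax.scipy.stats import norm

def bg_denoiser(s, tau, alpha=0.5, sigma2=1.0):
    """Posterior mean of x given s = x + N(0, tau^2), where
    x ~ (1 - alpha) * delta_0 + alpha * N(0, sigma2)."""
    p_on = alpha * norm.pdf(s, loc=0.0, scale=jnp.sqrt(sigma2 + tau ** 2))
    p_off = (1.0 - alpha) * norm.pdf(s, loc=0.0, scale=tau)
    pi = p_on / (p_on + p_off)                    # posterior inclusion probability
    return pi * (sigma2 / (sigma2 + tau ** 2)) * s

# Jacobian of the denoiser w.r.t. its input, as required by the
# Onsager term in the AMP iteration:
s = jnp.array([0.3, -1.2, 0.05])
J = jax.jacfwd(bg_denoiser)(s, 1.0)   # (3, 3) Jacobian; diagonal for a separable denoiser
```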
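The Experiment Setup row states that AMP selects its sparsity guess by cross-validation over α̂ ∈ {0.1, 0.3, 0.45, 0.6, 0.9}, with details deferred to the paper's Appendix E. Here is a minimal holdout sketch of that kind of selection; `fit_fn` is a hypothetical stand-in for running AMP at a given sparsity level, and the interleaved split is our simplification, not necessarily the authors' scheme.

```python
import numpy as np

ALPHA_GRID = (0.1, 0.3, 0.45, 0.6, 0.9)  # grid from the paper; excludes the true alpha = 0.5

def cv_select_alpha(X, y, fit_fn, holdout_every=5):
    """Return the alpha in ALPHA_GRID with the smallest held-out squared error.

    fit_fn(X, y, alpha) is a hypothetical stand-in that runs AMP with sparsity
    guess alpha and returns estimated regression coefficients. Holding out every
    k-th row keeps the samples in time order, which matters with change points.
    """
    n = X.shape[0]
    held_out = np.arange(n) % holdout_every == 0
    errors = []
    for alpha in ALPHA_GRID:
        b_hat = fit_fn(X[~held_out], y[~held_out], alpha)
        errors.append(np.mean((y[held_out] - X[held_out] @ b_hat) ** 2))
    return ALPHA_GRID[int(np.argmin(errors))]
```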