Asymptotics of Ridge Regression in Convolutional Models
Authors: Mojtaba Sahraee-Ardakan, Tung Mai, Anup Rao, Ryan A. Rossi, Sundeep Rangan, Alyson K. Fletcher
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we validate our theoretical results on simulated data. We generate data using a ground truth convolutional model of the form (3). We use i.i.d. complex normal convolution kernel and noise with different variances. For the data matrix X, we consider two different models: i) i.i.d. complex normal data; and ii) a non-Gaussian autoregressive process of order 1 (an AR(1) process). In both cases we take T = 256, n_y = 500 and use different values of n_x to create plots of estimation error with respect to δ = n_y/n_x. |
| Researcher Affiliation | Collaboration | (1) Department of Electrical and Computer Engineering, University of California, Los Angeles; (2) Department of Statistics, University of California, Los Angeles; (3) Adobe Research; (4) Department of Electrical and Computer Engineering, New York University. |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | No | The paper does not use any publicly available datasets; its experiments are run entirely on simulated data generated from the ground-truth convolutional model described in the Experiment Setup row. |
| Dataset Splits | No | The paper validates its theoretical results through simulations. It does not mention using standard training/validation/test splits of a publicly available dataset, nor does it define such splits for its simulated data. It describes parameters for generating data for comparison with theoretical predictions. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | We generate data using a ground truth convolutional model of the form (3). We use i.i.d. complex normal convolution kernel and noise with different variances. For the data matrix X, we consider two different models: i) i.i.d. complex normal data; and ii) a non-Gaussian autoregressive process of order 1 (an AR(1) process). In both cases we take T = 256, n_y = 500 and use different values of n_x to create plots of estimation error with respect to δ = n_y/n_x. ... The parameter a controls how fast the process is mixing. ... To show this, we use both a Gaussian AR(1) process with var(ξ_t) = 0.1 as well as ξ_t ∼ unif({−s, s}) with s = 0.1 to match the variances. In both cases we take a = 0.9 and measurement noise variance σ² = 0.1. ... In this case the variances of the signal and noise are 0.004 and 1, respectively. Figure 1 shows the log of normalized estimation error with respect to δ = n_y/n_x for three different values of the regularization parameter λ. |
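
To make the quoted Experiment Setup concrete, the sketch below generates the two covariate models described there (i.i.d. complex normal data, and an AR(1) process with either Gaussian or two-point innovations), fits ridge regression for a few values of the regularization parameter λ, and reports the normalized estimation error as δ = n_y/n_x is varied with n_y = 500 fixed. Equation (3) of the paper is not reproduced in this report, so the measurement step uses a plain linear model y = X w + ξ as a stand-in for the convolutional model, and T = 256 (the signal length in the paper's model) plays no role here; all function names, the n_x grid, and the λ values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_normal(shape, var=1.0):
    """i.i.d. circularly symmetric complex normal entries with the given variance."""
    s = np.sqrt(var / 2.0)
    return rng.normal(scale=s, size=shape) + 1j * rng.normal(scale=s, size=shape)

def make_X(n_y, n_x, model="iid", a=0.9, var_xi=0.1, innovation="gauss"):
    """Data matrix X (n_y rows of n_x covariates).
    model='iid' : i.i.d. complex normal entries.
    model='ar1' : each row is an AR(1) process x_t = a*x_{t-1} + xi_t, with
                  xi_t Gaussian or two-point (+/- s), matched to variance var_xi."""
    if model == "iid":
        return complex_normal((n_y, n_x))
    if innovation == "gauss":
        xi = rng.normal(scale=np.sqrt(var_xi), size=(n_y, n_x))
    else:
        xi = np.sqrt(var_xi) * rng.choice([-1.0, 1.0], size=(n_y, n_x))
    X = np.zeros((n_y, n_x))
    for t in range(1, n_x):
        X[:, t] = a * X[:, t - 1] + xi[:, t]
    return X

def ridge_error(X, w_true, noise_var, lam):
    """Normalized squared error of the ridge estimate of w in y = X w + noise."""
    n_y, n_x = X.shape
    y = X @ w_true + complex_normal(n_y, var=noise_var)
    w_hat = np.linalg.solve(X.conj().T @ X + lam * np.eye(n_x), X.conj().T @ y)
    return np.linalg.norm(w_hat - w_true) ** 2 / np.linalg.norm(w_true) ** 2

n_y = 500                          # fixed, as in the quoted setup
for lam in (1e-3, 1e-1, 1.0):      # three illustrative regularization levels
    for n_x in (250, 500, 1000):   # vary n_x to sweep delta = n_y / n_x
        X = make_X(n_y, n_x, model="ar1", a=0.9, var_xi=0.1, innovation="gauss")
        w = complex_normal(n_x)
        err = ridge_error(X, w, noise_var=0.1, lam=lam)
        print(f"lambda={lam:g}  delta={n_y/n_x:.2f}  log-error={np.log(err):.3f}")
```

Swapping `model="ar1"` for `model="iid"`, or `innovation="gauss"` for the two-point innovations, reproduces the other covariate settings mentioned in the setup; plotting the logged errors against δ would give curves analogous in shape to those described for Figure 1, though the numbers here come from the simplified stand-in model rather than the paper's convolutional model (3).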