Universality in Learning from Linear Measurements

Authors: Ehsan Abbasi, Fariborz Salehi, Babak Hassibi

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Although Theorem 1 holds for n and m growing to infinity, the results of our numerical simulations in Section 3.2 indicate the validity of universality for values of m and n on the order of hundreds. To validate the result of Theorem 1, we performed numerical simulations under various distributions for the measurement vectors. For our simulations in Figure 1, we use the estimator E{x_0, A, R^n, ℓ_1} to recover a k-sparse signal x_0 under three random ensembles for the measurement vectors {a_i}_{i=1}^m. (A sketch of such an ℓ_1 recovery appears after this table.)
Researcher Affiliation | Academia | Ehsan Abbasi, Fariborz Salehi, and Babak Hassibi: Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125 (eabbasi@caltech.edu, fsalehi@caltech.edu, hassibi@caltech.edu)
Pseudocode | No | The paper does not contain any sections or blocks explicitly labeled as 'Pseudocode' or 'Algorithm'.
Open Source Code | No | The paper does not provide any concrete access information (e.g., a repository link or an explicit statement of code release) for the source code of the methodology described.
Open Datasets | No | The paper discusses generating measurement vectors from certain distributions (e.g., Gaussian, Bernoulli, χ_1) for numerical simulations but does not refer to or provide access information for any named, publicly available datasets.
Dataset Splits | No | The paper does not provide specific dataset split information (e.g., exact percentages, sample counts, or detailed splitting methodology) for training, validation, or testing.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | In the simulations we used vectors of size n = 256, and the data is averaged over 10 independent realizations of the measurements. In a second set of simulations we used matrices of size n = 40, and the data is averaged over 20 independent realizations of the measurements. For each trial, we generate a random matrix M ∈ R^{n×n} with i.i.d. standard Gaussian entries; Σ = MM^T will play the role of the covariance matrix of the measurement vectors. For Figure 1a, {a_i}_{i=1}^m are drawn independently from the Gaussian distribution N(0, Σ). For the measurement vectors of Figure 1b, we first generate i.i.d. centered Bernoulli vectors Ber(0.8) and multiply each vector by M. For the measurement vectors of Figure 1c, we first generate i.i.d. centered χ_1 vectors and multiply each vector by M. (A sketch of this measurement-generation procedure appears after this table.)
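
The Experiment Setup row above fully specifies how the correlated measurement ensembles are generated, so a short sketch can make the procedure concrete. The snippet below is a minimal NumPy reconstruction of that description, not the authors' code: the function name draw_measurements and the scaling of the centered Bernoulli and χ_1 entries to unit variance are assumptions made here; only the recipe (draw i.i.d. entries, center them, and multiply by M so that the covariance is Σ = MM^T) comes from the paper.

```python
import numpy as np

def draw_measurements(m, n, ensemble, rng):
    """Draw m measurement vectors a_i in R^n with covariance Sigma = M M^T.

    Follows the recipe in the Experiment Setup row: generate i.i.d. entries,
    center them (and, as an assumption here, scale to unit variance), then
    multiply by the Gaussian mixing matrix M.
    """
    # Random mixing matrix M with i.i.d. standard Gaussian entries.
    M = rng.standard_normal((n, n))

    if ensemble == "gaussian":
        # a_i ~ N(0, Sigma) with Sigma = M M^T, i.e. a_i = M g_i with g_i ~ N(0, I).
        Z = rng.standard_normal((m, n))
    elif ensemble == "bernoulli":
        # Centered Bernoulli(0.8) entries; unit-variance scaling is an assumption.
        p = 0.8
        Z = (rng.binomial(1, p, size=(m, n)) - p) / np.sqrt(p * (1 - p))
    elif ensemble == "chi1":
        # chi_1 variables are |g| for g ~ N(0, 1); center (and scale, by assumption).
        G = np.abs(rng.standard_normal((m, n)))
        mean = np.sqrt(2 / np.pi)        # E|g|
        std = np.sqrt(1 - 2 / np.pi)     # sqrt(Var|g|)
        Z = (G - mean) / std
    else:
        raise ValueError(f"unknown ensemble: {ensemble}")

    # Rows of A are the measurement vectors a_i^T = z_i^T M^T.
    A = Z @ M.T
    return A, M


rng = np.random.default_rng(0)
A, M = draw_measurements(m=128, n=256, ensemble="bernoulli", rng=rng)
```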
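
The Research Type row refers to recovering a k-sparse signal x_0 with the ℓ_1-based estimator E{x_0, A, R^n, ℓ_1}. Since the paper releases no code, the sketch below substitutes a generic LASSO solved by ISTA (iterative soft-thresholding) as a stand-in; the choice of solver, the regularization weight lam, the step size, and the noiseless measurement model y = A x_0 are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_threshold(v, t):
    """Entrywise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_recover(A, y, lam=0.01, n_iter=500):
    """Approximate argmin_x 0.5 * ||y - A x||_2^2 + lam * ||x||_1 via ISTA.

    A generic stand-in for an l1-based estimator; lam and n_iter are
    illustrative choices, not values taken from the paper.
    """
    m, n = A.shape
    x = np.zeros(n)
    # Step size 1/L, where L is the largest eigenvalue of A^T A (Lipschitz constant).
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x


# Tiny usage example: recover a k-sparse signal from m = 128 linear measurements.
rng = np.random.default_rng(1)
n, m, k = 256, 128, 10
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # placeholder Gaussian ensemble
y = A @ x0                                     # noiseless measurements (assumption)
x_hat = l1_recover(A, y)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```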