Superset Technique for Approximate Recovery in One-Bit Compressed Sensing

Authors: Larkin Flodin, Venkata Gandikota, Arya Mazumdar

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present some empirical results relating to the use of our superset technique in approximate vector recovery for real-valued signals.
Researcher Affiliation | Academia | Larkin Flodin, University of Massachusetts Amherst, Amherst, MA 01003, lflodin@cs.umass.edu; Venkata Gandikota, University of Massachusetts Amherst, Amherst, MA 01003, gandikota.venkata@gmail.com; Arya Mazumdar, University of Massachusetts Amherst, Amherst, MA 01003, arya@cs.umass.edu
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper.
Open Source Code | No | The paper does not provide explicit statements about releasing source code for the described methodology, nor does it include direct links to a code repository.
Open Datasets | No | The paper describes how random signals are generated for experiments but does not provide access information for a publicly available or open dataset.
Dataset Splits | No | The paper does not explicitly provide details about training, validation, or test dataset splits; it describes generating random signals for experiments.
Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library or solver names like Python 3.8 or CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | Yes | For the solid lines in Figure 1 labeled 4k log n Superset, we again performed 500 trials for each value of (n, m, k), where in each trial we generated a measurement matrix M (the vertical concatenation of M^(1) and M^(2)) with m rows in total. Each entry of M^(1) is a Bernoulli random variable that takes value 1 with probability 1/(k+1) and value 0 with probability k/(k+1); ... The entries of M^(2) are drawn from N(0, 1). We use a standard group testing decoding (i.e., remove any coordinates that appear in a test with result 0) to determine a superset based on y1 = sign(M^(1) x), then use BIHT (again run either until convergence or for 1000 iterations) to reconstruct x within the superset using the measurement results y2 = sign(M^(2) x). The number of rows in M^(1) is taken to be m1 = 4k log10(n), based on the fact that with high probability Ck log n rows for some constant C should be sufficient to recover an O(k)-sized superset, and the remainder m2 = m - m1 of the measurements are used in M^(2).
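The quoted setup can be sketched end to end in NumPy. This is a minimal illustration of the two-stage superset technique, not the authors' code: the problem sizes n, k, and m2, the BIHT step size 1/m2, and the convergence check are assumptions introduced for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small problem sizes; the paper sweeps over (n, m, k).
n, k = 1000, 5
m1 = int(4 * k * np.log10(n))   # rows of the group-testing matrix M^(1): 4k log10(n)
m2 = 200                        # remaining rows, used for the Gaussian matrix M^(2)

# A k-sparse unit-norm signal (one-bit measurements only determine direction).
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)
x /= np.linalg.norm(x)

# M^(1): i.i.d. entries equal to 1 with probability 1/(k+1), else 0.
# M^(2): i.i.d. N(0, 1) entries.
M1 = (rng.random((m1, n)) < 1.0 / (k + 1)).astype(float)
M2 = rng.standard_normal((m2, n))
y1 = np.sign(M1 @ x)
y2 = np.sign(M2 @ x)

# Group-testing decode: a test with result 0 touched no support coordinate,
# so every coordinate included in such a test can be eliminated.
eliminated = (M1[y1 == 0] > 0).any(axis=0)
superset = np.flatnonzero(~eliminated)

# BIHT restricted to the superset columns of M^(2), run until the iterate
# stops changing or for 1000 iterations.
A = M2[:, superset]
z = np.zeros(len(superset))
for _ in range(1000):
    g = z + (1.0 / m2) * (A.T @ (y2 - np.sign(A @ z)))
    keep = np.argsort(np.abs(g))[-k:]   # hard-threshold to the k largest entries
    z_next = np.zeros_like(z)
    z_next[keep] = g[keep]
    if np.allclose(z_next, z):
        break
    z = z_next

# Embed the superset-domain estimate back into R^n and normalize.
x_hat = np.zeros(n)
x_hat[superset] = z
norm = np.linalg.norm(x_hat)
if norm > 0:
    x_hat /= norm
```

Note that the true support is never eliminated by the group-testing stage: any test containing a support coordinate has a nonzero result (almost surely, for Gaussian signal values), so only off-support coordinates can be removed, which is why the first stage returns a superset.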