b-bit Marginal Regression

Authors: Martin Slawski, Ping Li

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We here provide numerical results supporting/illustrating some of the key points made in the previous sections. We also compare b-bit Marginal Regression to alternative recovery algorithms. Setup. Our simulations follow model (1) with n = 500, s ∈ {10, 20, ..., 50}, σ ∈ {0, 1, 2} and b ∈ {1, 2}. ... The experiments confirm that the relative performance of 1-bit and 2-bit measurements for estimating x⋆ predicted by the analysis closely agrees with what is observed empirically, as can be seen in Figure 2.
Researcher Affiliation | Academia | Martin Slawski, Department of Statistics and Biostatistics, Department of Computer Science, Rutgers University, martin.slawski@rutgers.edu; Ping Li, Department of Statistics and Biostatistics, Department of Computer Science, Rutgers University, pingli@stat.rutgers.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It mentions a "Supplement" for proofs and additional experiments, but no specific statement about code availability.
Open Datasets | No | The paper does not provide concrete access information for a publicly available or open dataset. Instead, it describes generating synthetic data for its simulations: "Our simulations follow model (1) with n = 500, s ∈ {10, 20, ..., 50}, σ ∈ {0, 1, 2} and b ∈ {1, 2}. Regarding x⋆, the support and its signs are selected uniformly at random, while the absolute magnitudes of the entries corresponding to the support are drawn from the uniform distribution on [β, 2β]..." (A code sketch of this generation recipe follows the table.)
Dataset Splits | No | The paper states that "Each possible configuration for s, f and σ is replicated 20 times," but does not specify train, validation, or test dataset splits (e.g., percentages, sample counts, or cross-validation setup).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | Our simulations follow model (1) with n = 500, s ∈ {10, 20, ..., 50}, σ ∈ {0, 1, 2} and b ∈ {1, 2}. Regarding x⋆, the support and its signs are selected uniformly at random, while the absolute magnitudes of the entries corresponding to the support are drawn from the uniform distribution on [β, 2β], where β = f (1/λ_{1,σ}) √(log(n)/m) and m = f² (1/λ_{1,σ})² s log n, with f ∈ {1.5, 3, 4.5, ..., 12} controlling the signal strength. The resulting signal is then normalized to unit 2-norm. ... For b = 2, we use Lloyd-Max quantization for a N(0, 1) random variable, which is optimal for σ = 0 but not for σ > 0. Each possible configuration for s, f and σ is replicated 20 times. (Sketches of the signal generation and the measurement/quantization steps follow the table.)
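
Below is a minimal sketch of the signal-generation recipe quoted in the Open Datasets and Experiment Setup rows. The function name, default arguments, and the use of NumPy are illustrative assumptions; only the distributional recipe (support and signs uniform at random, magnitudes from Uniform[β, 2β], unit 2-norm normalization) comes from the paper.

```python
import numpy as np

def make_sparse_signal(n=500, s=10, beta=1.0, rng=None):
    """Draw x* with support and signs chosen uniformly at random, nonzero
    magnitudes from Uniform[beta, 2*beta], then normalize to unit 2-norm."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(n)
    support = rng.choice(n, size=s, replace=False)
    signs = rng.choice([-1.0, 1.0], size=s)
    x[support] = signs * rng.uniform(beta, 2.0 * beta, size=s)
    return x / np.linalg.norm(x)
```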
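
And a hedged sketch of one simulation replicate covering the measurement and b-bit quantization step, reusing make_sparse_signal from the sketch above. The standard Gaussian design for A is an assumption consistent with model (1) but not spelled out in this summary; lam stands in for the scaling constant λ_{1,σ}, whose value is not reproduced here; and lloyd_max_levels approximates the Lloyd-Max quantizer for N(0, 1) numerically via Lloyd's algorithm on a large Gaussian sample rather than using closed-form levels.

```python
import numpy as np

def lloyd_max_levels(num_levels=4, samples=500_000, iters=100, rng=None):
    """Approximate Lloyd-Max output levels for a N(0, 1) variable via
    Lloyd's algorithm (1-D k-means) on a large Gaussian sample."""
    rng = np.random.default_rng(0) if rng is None else rng
    z = rng.standard_normal(samples)
    # initialize at evenly spaced Gaussian quantiles
    levels = np.quantile(z, (np.arange(num_levels) + 0.5) / num_levels)
    for _ in range(iters):
        thresholds = (levels[:-1] + levels[1:]) / 2.0
        cells = np.digitize(z, thresholds)
        levels = np.array([z[cells == k].mean() for k in range(num_levels)])
    return levels

def quantize(u, levels):
    """Map each entry of u to its nearest quantization level."""
    thresholds = (levels[:-1] + levels[1:]) / 2.0
    return levels[np.digitize(u, thresholds)]

def one_replicate(n=500, s=10, f=3.0, sigma=1.0, b=2, lam=1.0, rng=None):
    """One simulation replicate of the quoted setup. `lam` is a placeholder
    for lambda_{1,sigma}; the Gaussian design for A is an assumption."""
    rng = np.random.default_rng() if rng is None else rng
    m = int(np.ceil(f**2 * (1.0 / lam) ** 2 * s * np.log(n)))  # m = f^2 (1/lam)^2 s log n
    beta = f * (1.0 / lam) * np.sqrt(np.log(n) / m)            # beta = f (1/lam) sqrt(log(n)/m)
    x = make_sparse_signal(n=n, s=s, beta=beta, rng=rng)       # from the sketch above
    A = rng.standard_normal((m, n))
    u = A @ x + sigma * rng.standard_normal(m)                 # noisy linear measurements
    y = np.sign(u) if b == 1 else quantize(u, lloyd_max_levels(2**b, rng=rng))
    return A, y, x
```

Per the quoted setup, each (s, f, σ) configuration would then be run for b ∈ {1, 2} and replicated 20 times.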