Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Homomorphic Sensing: Sparsity and Noise

Authors: Liangzu Peng, Boshi Wang, Manolis Tsakiris

ICML 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On the algorithmic level, we solve unlabeled compressed sensing by an iterative algorithm validated by synthetic data experiments. Finally, under the unifying homomorphic sensing framework we connect unlabeled sensing to other important practical problems.
Researcher Affiliation | Academia | School of Information Science and Technology, ShanghaiTech University, Shanghai, China. Correspondence to: Liangzu Peng, Boshi Wang, Manolis C. Tsakiris <penglz,wangbsh,EMAIL>.
Pseudocode | Yes | The iterative update is given as: x^(t+1) ← Proj_K(x^(t) − µ Aᵀ sgn(A x^(t) − y)) (17); with J the support {i : x^(t+1)_i ≠ 0} of x^(t+1), x^(t+1)_J ← argmin_{x ∈ R^|J|} ‖y − A_J x‖_1 (18).
Open Source Code | No | The paper does not include an explicit statement about releasing its source code or a link to a code repository for the methodology described.
Open Datasets | No | Data generation. We generate data by i) randomly sampling the entries of A ∈ R^{m×n} from the standard normal distribution N(0, 1), ii) randomly selecting a support of the k-sparse x* ∈ R^n whose non-zero entries are randomly sampled also from N(0, 1), iii) randomly producing a p-sparse permutation Π*, and iv) computing y = Π* A x*.
Dataset Splits | No | The paper generates synthetic data for its experiments but does not specify train/validation/test splits for a fixed dataset, nor does it refer to predefined splits with citations. It evaluates the algorithm over '100 trials' of newly generated data.
Hardware Specification | Yes | We evaluate the algorithm with µ := 10^−4 and with the number of iterations set to T := 20 on an Intel(R) i7-8650U, 1.9 GHz, 16 GB machine.
Software Dependencies | No | The paper mentions using 'an ADMM algorithm implemented in the FOM toolbox of Beck & Guttmann-Beck (2019)' but does not provide specific version numbers for the FOM toolbox or any other software dependencies such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | We evaluate the algorithm with µ := 10^−4 and with the number of iterations set to T := 20 on an Intel(R) i7-8650U, 1.9 GHz, 16 GB machine. We generate data by i) randomly sampling the entries of A ∈ R^{m×n} from the standard normal distribution N(0, 1), ii) randomly selecting a support of the k-sparse x* ∈ R^n whose non-zero entries are randomly sampled also from N(0, 1), iii) randomly producing a p-sparse permutation Π*, and iv) computing y = Π* A x*. In Figure 1a we set p := 0.2m. Keeping m = 1400, n = 2000, k = 25, p = 0.2m fixed, we furthermore evaluated the robustness of the algorithm to noise. We added noise to the measurements y as per the SNR.
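The pseudocode row's updates (17)-(18) can be sketched as follows. This is an illustrative reading, not the authors' implementation: it assumes Proj_K is projection onto the set of k-sparse vectors (hard thresholding), uses x^(t) inside the sign term of the subgradient step, and solves the ℓ1 refit on the support as a plain linear program, whereas the paper mentions an ADMM solver from the FOM toolbox.

```python
import numpy as np
from scipy.optimize import linprog

def hard_threshold(x, k):
    """Proj_K under the assumption that K is the set of k-sparse vectors:
    keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def l1_refit(A_J, y):
    """argmin_x ||y - A_J x||_1 via the standard LP reformulation:
    minimize 1'.t subject to -t <= y - A_J x <= t."""
    m, s = A_J.shape
    c = np.concatenate([np.zeros(s), np.ones(m)])  # variables: [x, t]
    A_ub = np.block([[A_J, -np.eye(m)], [-A_J, -np.eye(m)]])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (s + m))
    return res.x[:s]

def sparse_iterative_solver(A, y, k, mu=1e-4, T=20):
    """Iterate (17)-(18) with the paper's reported mu = 1e-4, T = 20."""
    x = np.zeros(A.shape[1])
    for _ in range(T):
        # (17): projected subgradient step on ||Ax - y||_1
        x = hard_threshold(x - mu * A.T @ np.sign(A @ x - y), k)
        J = np.flatnonzero(x)  # support of the current iterate
        if J.size:
            # (18): l1 refit restricted to the support J
            x[J] = l1_refit(A[:, J], y)
    return x
```

The support-restricted ℓ1 refit is what gives the method its robustness to sparsely corrupted (permuted) measurements, since the ℓ1 residual downweights the mismatched rows.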
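The data-generation recipe in the Experiment Setup row can be sketched in NumPy. Two details here are assumptions rather than quotes from the paper: "p-sparse permutation" is read as a permutation that displaces at most p entries, and the noise scaling uses the common convention that SNR (in dB) fixes the ratio of signal norm to noise norm.

```python
import numpy as np

def p_sparse_permutation(m, p, rng):
    """A permutation of {0,...,m-1} that moves at most p entries
    (one reading of 'p-sparse'): shuffle a random size-p subset,
    leave the remaining indices fixed."""
    perm = np.arange(m)
    moved = rng.choice(m, size=p, replace=False)
    perm[moved] = rng.permutation(moved)
    return perm

def generate_instance(m=1400, n=2000, k=25, snr_db=None, rng=None):
    """Steps i)-iv) of the paper's data generation, with p := 0.2m."""
    if rng is None:
        rng = np.random.default_rng()
    p = int(0.2 * m)
    A = rng.standard_normal((m, n))            # i) entries of A ~ N(0, 1)
    x = np.zeros(n)                            # ii) k-sparse x*, nonzeros ~ N(0, 1)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    perm = p_sparse_permutation(m, p, rng)     # iii) p-sparse permutation Pi*
    y = (A @ x)[perm]                          # iv) y = Pi* A x*
    if snr_db is not None:                     # optional noise at a given SNR (dB)
        noise = rng.standard_normal(m)
        noise *= np.linalg.norm(y) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
        y = y + noise
    return A, x, perm, y
```

With the defaults m = 1400, n = 2000, k = 25, this reproduces the problem sizes quoted above; the noisy variant corresponds to the robustness experiment where noise is added to y as per the SNR.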