Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Implicit Regularization for Group Sparsity
Authors: Jiangyuan Li, Thanh V Nguyen, Chinmay Hegde, Raymond K. W. Wong
ICLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | SIMULATION STUDIES: We conduct various experiments on simulated data to support our theory. In Figure 2, we present the recovery error of w on the left, and recovered group magnitudes on the right. |
| Researcher Affiliation | Academia | Texas A&M University; New York University; EMAIL; EMAIL; EMAIL |
| Pseudocode | Yes | Algorithm 1 Gradient descent with weight normalization |
| Open Source Code | Yes | Code is available on https://github.com/jiangyuan2li/Implicit-Group-Sparsity |
| Open Datasets | No | The paper explicitly states, 'We conduct various experiments on simulated data to support our theory. Following the model in Section 2, we sample the entries of X i.i.d. using Rademacher random variables and the entries of the noise vector ξ i.i.d. under N(0, σ²).' This indicates the data was simulated, not obtained from a publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper uses simulated data but does not specify any training, validation, or test dataset splits. It only mentions the total number of observations (n) and dimension (p) for its simulations (e.g., 'we set n = 150 and p = 300'). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or computing infrastructure used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) that were used to run the experiments. |
| Experiment Setup | Yes | In this experiment, we set n = 150 and p = 300. The number of non-zero entries is 9, divided into 3 groups of size 3. We run both Algorithms 1 and 2 with the same initialization α = 10⁻⁶. The step size γ on u and the decreased step size η on v are both 10⁻³. |
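The simulated-data setup quoted above (Rademacher design matrix, Gaussian noise, group-sparse ground truth with n = 150, p = 300, and 9 non-zero entries in 3 groups of 3) can be sketched as follows. This is a minimal illustration, not the authors' code: the noise level σ, the group locations, and the signal magnitude are assumptions not stated in the excerpts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions from the reported setup: n = 150 observations, p = 300 features.
n, p = 150, 300

# Design matrix with i.i.d. Rademacher (+1/-1) entries, as quoted from the paper.
X = rng.choice([-1.0, 1.0], size=(n, p))

# Ground-truth vector w*: 9 non-zero entries split into 3 groups of size 3.
# Group positions and the magnitude 1.0 are illustrative assumptions.
w_star = np.zeros(p)
for start in (0, 100, 200):  # assumed group locations
    w_star[start:start + 3] = 1.0

# Noise vector ξ with i.i.d. N(0, σ²) entries; σ = 0.1 is an assumption.
sigma = 0.1
xi = rng.normal(0.0, sigma, size=n)

# Observations under the linear model y = X w* + ξ.
y = X @ w_star + xi
```

The actual experiments, including the gradient-descent iterations of Algorithms 1 and 2 with initialization α = 10⁻⁶, are in the authors' repository linked above.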