Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Sparse Contextual CDF Regression
Authors: Kamyar Azizzadenesheli, William Lu, Anuran Makur, Qian Zhang
TMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we numerically simulate the data generation processes described in Section 1.2 and empirically evaluate the accuracy of our proposed lasso (3) and elastic net (4) estimators on synthetic data. |
| Researcher Affiliation | Collaboration | Kamyar Azizzadenesheli (Nvidia Corporation); William Lu (Department of Computer Science, Purdue University); Anuran Makur (Department of Computer Science and School of Electrical and Computer Engineering, Purdue University); Qian Zhang (Department of Statistics, Purdue University). |
| Pseudocode | No | The paper describes the mathematical formulations of the lasso, elastic net, and Dantzig selector estimators but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | For specific details regarding our implementation, we refer interested readers to our Python code at https://github.com/enchainingrealm/SparseContextualCDFRegression. |
| Open Datasets | No | In this section, we numerically simulate the data generation processes described in Section 1.2 and empirically evaluate the accuracy of our proposed lasso (3) and elastic net (4) estimators on synthetic data. |
| Dataset Splits | No | The paper describes using 'n' data samples for numerical simulations but does not specify any training, testing, or validation dataset splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments or numerical simulations. |
| Software Dependencies | No | The paper mentions the implementation is in Python code, but it does not specify version numbers for Python or any specific software libraries or dependencies used. |
| Experiment Setup | Yes | For the ℓ2-regularization hyperparameter of ridge regression and the ℓ1-regularization hyperparameter of lasso and elastic net regression, we use λ = 4√((2/n) log(2d/δ)) as specified in Theorems 1 and 3, with δ = 0.001. For experimental convenience, we use various fixed values of λ2 for the elastic net estimator, which we report in Figures 1 and 2. |
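The regularization rule quoted in the Experiment Setup row can be computed directly. The sketch below assumes the extraction-garbled formula reads λ = 4·√((2/n) log(2d/δ)); the function name and parameter names are illustrative, not taken from the paper's code.

```python
import math

def regularization_lambda(n: int, d: int, delta: float = 0.001) -> float:
    """Assumed reading of the paper's rule: lambda = 4 * sqrt((2/n) * log(2d/delta)).

    n is the number of samples, d the dimension, and delta the confidence
    parameter (the quoted experiments fix delta = 0.001).
    """
    return 4.0 * math.sqrt((2.0 / n) * math.log(2.0 * d / delta))

# The value shrinks as n grows, consistent with an O(sqrt(log(d)/n)) rate.
lam = regularization_lambda(n=1000, d=100)
```

Under this reading, the same λ is plugged in as the ℓ2 penalty for ridge and the ℓ1 penalty for lasso and elastic net, while the elastic net's second hyperparameter λ2 is swept over fixed values.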