Regularization properties of adversarially-trained linear regression
Authors: Antônio H. Ribeiro, Dave Zachariah, Francis Bach, Thomas B. Schön
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We confirm our theoretical findings with numerical examples." (Abstract); "Numerical Experiments" (section title). |
| Researcher Affiliation | Academia | Antônio H. Ribeiro (Uppsala University, antonio.horta.ribeiro@it.uu.se); Dave Zachariah (Uppsala University, dave.zachariah@it.uu.se); Francis Bach (PSL Research University, INRIA, francis.bach@inria.fr); Thomas B. Schön (Uppsala University, thomas.schon@it.uu.se) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code for reproducing the figures is available in: https://github.com/antonior92/advtrain-linreg |
| Open Datasets | Yes | "Regularization paths estimated in the Diabetes dataset [18]." (Figure 1 caption); "We illustrate our method on the Diverse MAGIC wheat dataset [44] from the National Institute of Agricultural Botany." (Section 7). |
| Dataset Splits | No | "For Lasso, ridge and adversarial training, we use the best δ or λ available for each method (obtained via grid search). We use a random ξ, since we do not know the true additive noise. Even with this approximation, ℓ∞-adversarial training performs comparably with Lasso with the regularization parameter set using 5-fold cross-validation doing a full search in the hyperparameter space." While 5-fold cross-validation is mentioned for hyperparameter tuning (a sketch of that workflow follows the table), the paper does not explicitly describe how the main datasets (e.g., Diabetes, MAGIC wheat) were split into train/validation/test sets, with percentages or sample counts. |
| Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments. |
| Software Dependencies | No | "In all the numerical examples the adversarial training solution is implemented by minimizing (2) using CVXPY [42]." The paper names CVXPY but does not specify its version number or any other software dependencies with version numbers (an illustrative CVXPY sketch follows the table). |
| Experiment Setup | Yes | "For Lasso, ridge and adversarial training, we use the best δ or λ available for each method (obtained via grid search)." (Section 7); "We generate the data synthetically using an isotropic Gaussian feature model (see Section 7) with n = 60 training data points and p = 200 features." (Figure 4 caption). |
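
For context, the following is a minimal, hypothetical sketch of the kind of CVXPY formulation the Software Dependencies and Experiment Setup rows describe. It assumes that the adversarial training objective in the paper's equation (2) reduces, for ℓ∞-bounded perturbations, to minimizing Σᵢ (|yᵢ − xᵢᵀβ| + δ‖β‖₁)², and it reuses the synthetic isotropic Gaussian setup (n = 60, p = 200) quoted above. The value of δ and all variable names are illustrative assumptions, not taken from the authors' released code.

```python
# Hypothetical sketch (not the authors' code): l_inf-adversarial training for
# linear regression via CVXPY, assuming the objective of Eq. (2) reduces to
#   minimize_beta  sum_i ( |y_i - x_i' beta| + delta * ||beta||_1 )^2
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, p = 60, 200                   # sizes quoted in the Figure 4 caption
delta = 0.1                      # illustrative; the paper tunes delta by grid search

X = rng.standard_normal((n, p))  # isotropic Gaussian feature model
beta_true = np.zeros(p)
beta_true[:5] = 1.0              # arbitrary sparse ground truth for the demo
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta = cp.Variable(p)
# Per-sample adversarial residual: absolute residual plus delta times the l1 norm.
adv_loss = cp.sum_squares(cp.abs(y - X @ beta) + delta * cp.norm(beta, 1))
cp.Problem(cp.Minimize(adv_loss)).solve()

print("first coefficients:", np.round(beta.value[:5], 3))
```

As δ → 0 the problem reduces to ordinary least squares; the δ‖β‖₁ term is what gives ℓ∞-adversarial training its Lasso-like regularization behavior, which is the comparison the quoted experiments examine.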
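
Similarly, the 5-fold cross-validation baseline mentioned in the Dataset Splits row could look like the sketch below, using scikit-learn's LassoCV; the grid of regularization values is an assumption for illustration, not taken from the paper.

```python
# Hypothetical sketch: choosing the Lasso regularization parameter by 5-fold
# cross-validation over a grid, as the quoted baseline description suggests.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 200))                 # same synthetic sizes as above
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(60)

model = LassoCV(alphas=np.logspace(-4, 1, 50), cv=5).fit(X, y)
print("selected regularization strength:", model.alpha_)
```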