Interpolation can hurt robust generalization even when there is no noise
Authors: Konstantin Donhauser, Alexandru Tifrea, Michael Aerni, Reinhard Heckel, Fanny Yang
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We plot the robust accuracy gain of (a) early-stopped neural networks compared to models at convergence, fit on sanitized (binary 1-3) MNIST that arguably has minimal noise; and ℓ2-regularized estimators compared to interpolators with λ → 0 for (b) linear regression with n = 10³ and (c) robust logistic regression with n = 10³ (a minimal code sketch of the regularized-vs-interpolating comparison follows the table). |
| Researcher Affiliation | Academia | ETH Zurich, Rice University, Technical University of Munich |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the code for the described methodology is open-sourced. |
| Open Datasets | Yes | sanitized (binary 1-3) MNIST |
| Dataset Splits | No | The paper does not explicitly provide specific training/validation/test dataset splits (percentages, sample counts, or references to predefined splits) in the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments in the main text. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., library or solver names with versions) needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions that experimental details are in Appendix B, which is not provided. The main text describes some model and data parameters (e.g., ϵ values, d/n ratios) but does not provide specific training hyperparameters such as learning rate, batch size, or number of epochs. |
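
The "Research Type" row quotes the paper's comparison of ℓ2-regularized estimators with interpolators obtained in the limit λ → 0. The snippet below is a minimal, hedged sketch of such a comparison for the linear-regression case (b); it is not the authors' code. Only n = 10³ and the noiseless-label setting are taken from the quoted text, while the dimension d, the sparse ground truth, the test-set size, and the perturbation budget ε are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# n = 10^3 samples as quoted above; d, the sparsity of the ground truth,
# and eps are assumptions made only for this sketch.
n, d = 1_000, 4_000
eps = 0.05  # assumed l_inf perturbation budget

theta_star = np.zeros(d)
theta_star[:10] = 1.0 / np.sqrt(10)  # assumed sparse ground truth

X_train = rng.standard_normal((n, d))
y_train = X_train @ theta_star       # noiseless labels (no label noise)
X_test = rng.standard_normal((1_000, d))
y_test = X_test @ theta_star


def ridge(X, y, lam):
    """Closed-form ridge estimator in dual form (cheap when d >> n);
    lam -> 0 recovers the minimum-l2-norm interpolator."""
    K = X @ X.T
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return X.T @ alpha


def robust_mse(theta, X, y, eps):
    """Worst-case squared error under ||delta||_inf <= eps input perturbations:
    the adversary can shift each prediction by at most eps * ||theta||_1."""
    resid = np.abs(X @ theta - y)
    return np.mean((resid + eps * np.linalg.norm(theta, 1)) ** 2)


for lam in [1e-8, 1.0, 10.0, 100.0]:   # lam ~ 0 plays the role of the interpolator
    theta_hat = ridge(X_train, y_train, lam)
    std = np.mean((X_test @ theta_hat - y_test) ** 2)
    rob = robust_mse(theta_hat, X_test, y_test, eps)
    print(f"lambda={lam:8.1e}  standard MSE={std:.3f}  robust MSE={rob:.3f}")
```

In a setup of this kind, a moderate λ typically trades a small amount of standard error for a noticeably smaller ℓ1 norm of the estimator, which is the mechanism behind the robust-generalization gap between regularized estimators and interpolators that the paper studies.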