Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Memorization With Neural Nets: Going Beyond the Worst Case
Authors: Sjoerd Dirksen, Patrick Finke, Martin Genzel
JMLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We verify our theoretical result with numerical experiments and additionally investigate the effectiveness of the algorithm on MNIST and CIFAR-10. |
| Researcher Affiliation | Collaboration | Sjoerd Dirksen, Mathematical Institute, Utrecht University, 3584 CD Utrecht, Netherlands; Patrick Finke, Mathematical Institute, Utrecht University, 3584 CD Utrecht, Netherlands; Martin Genzel, Merantix Momentum GmbH, 13355 Berlin, Germany |
| Pseudocode | Yes | Algorithm 1 Interpolation; Algorithm 2 Interpolation (experiments) |
| Open Source Code | Yes | Code is available at https://github.com/patrickfinke/memo. We use Python 3, Scikit-learn, and NumPy. |
| Open Datasets | Yes | We additionally investigate the effectiveness of the algorithm on MNIST and CIFAR-10. Examining binary classification subproblems of the MNIST data set (LeCun et al., 1998) |
| Dataset Splits | No | The paper describes experiments on various datasets (Two Moons, MNIST, CIFAR-10) but does not specify explicit training/test/validation splits or methodologies used for splitting the data for its experiments. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | Code is available at https://github.com/patrickfinke/memo. We use Python 3, Scikit-learn, and NumPy. The paper names its software dependencies (Python 3, Scikit-learn, NumPy) but provides a version number only for Python; specific versions for Scikit-learn and NumPy are missing. |
| Experiment Setup | No | The paper describes the algorithms and their performance on datasets but does not explicitly provide details about hyperparameters (e.g., learning rate, batch size, epochs, optimizers) or system-level training settings for the numerical experiments. |
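The two gaps above (no documented splits, no pinned setup) are straightforward to close in the paper's stated stack (Python 3, NumPy). A minimal sketch of a seeded, reproducible train/test split; the function name, array contents, and the 80/20 ratio are illustrative, not taken from the paper:

```python
import numpy as np

def seeded_split(X, y, test_frac=0.2, seed=0):
    """Deterministically split (X, y) into train/test with a fixed seed,
    so the exact split can be reported and reproduced."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    test, train = idx[:n_test], idx[n_test:]
    return X[train], y[train], X[test], y[test]

# Illustrative placeholder data standing in for e.g. a Two Moons sample.
X = np.arange(100, dtype=float).reshape(50, 2)
y = (np.arange(50) % 2).astype(int)
X_tr, y_tr, X_te, y_te = seeded_split(X, y)
print(X_tr.shape, X_te.shape)  # (40, 2) (10, 2)
```

Reporting the seed alongside `test_frac` would make the splits in the paper's experiments fully reproducible.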