Simplicity Bias in Overparameterized Machine Learning
Authors: Yakir Berchenko
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Here we demonstrate that simplicity bias is a major phenomenon to be reckoned with in overparameterized machine learning. In addition to explaining the outcome of simplicity bias, we also study its source: following concrete rigorous examples, we argue that... |
| Researcher Affiliation | Academia | Department of Industrial Engineering and Management, Ben-Gurion University of the Negev, P. O. 653, Beer-Sheva 84105, Israel. berchenk@bgu.ac.il |
| Pseudocode | No | The paper describes "Naive Algorithm" in numbered steps within paragraph text but does not provide formally structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper is theoretical and does not mention releasing any source code for its described methodologies. |
| Open Datasets | No | The paper uses abstract examples for theoretical analysis (e.g., "Boolean functions on n variables," "black/white images with n = 28 × 28 = 784 pixels") but does not provide access information (link, DOI, or specific citation) for a publicly available dataset. |
| Dataset Splits | No | The paper is theoretical and does not describe experimental dataset splits (training, validation, test) or cross-validation setups. |
| Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not provide specific experimental setup details such as hyperparameters or training configurations for an empirical study. |