Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
On the Inductive Bias of Dropout
Authors: David P. Helmbold, Philip M. Long
JMLR 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper we continue the exploration of dropout as a regularizer pioneered by Wager et al. We focus on linear classification where a convex proxy to the misclassification loss (i.e. the logistic loss used in logistic regression) is minimized. We show: (a) when the dropout-regularized criterion has a unique minimizer, (b) when the dropout-regularization penalty goes to infinity with the weights and when it remains bounded, (c) that the dropout regularization can be non-monotonic as individual weights increase from 0, and (d) that the dropout regularization penalty may not be convex. Our theoretical study will concern learning a linear classifier via convex optimization. |
| Researcher Affiliation | Collaboration | David P. Helmbold, Department of Computer Science, University of California, Santa Cruz, Santa Cruz, CA 95064, USA; Philip M. Long, Microsoft, 1020 Enterprise Way, Sunnyvale, CA 94089, USA |
| Pseudocode | No | The paper describes mathematical definitions and theoretical analysis using propositions, theorems, and lemmas, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statements about releasing source code, nor does it include links to a code repository or mention code in supplementary materials. |
| Open Datasets | No | The paper defines abstract distributions (e.g., P5, P6, P8, P9, P10) for theoretical analysis rather than utilizing publicly available datasets. No links, DOIs, repositories, or formal citations are provided for accessing any dataset. |
| Dataset Splits | No | The paper performs theoretical analysis on mathematically defined distributions rather than empirical experiments on datasets, so the concept of training/test/validation splits is not applicable and not mentioned. |
| Hardware Specification | No | The paper focuses on theoretical analysis and does not describe any experiments that would require specific hardware specifications. |
| Software Dependencies | No | The paper focuses on theoretical analysis and does not describe any experimental implementation details or specific software dependencies with version numbers. |
| Experiment Setup | No | The paper conducts theoretical analysis and does not describe an experimental setup, hyperparameters, or training configurations. |
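For readers unfamiliar with the criterion the paper analyzes, the following is a minimal Monte Carlo sketch of dropout-regularized logistic loss for a linear classifier. The function name, the empirical (rather than population) averaging, and the sampling scheme are illustrative assumptions, not code from the paper; the paper's analysis is purely theoretical.

```python
import numpy as np

def dropout_logistic_loss(w, X, y, q, n_samples=1000, rng=None):
    """Monte Carlo estimate of the dropout-regularized logistic loss
    (illustrative sketch, not the paper's code).

    Each feature is independently kept with probability 1 - q and
    rescaled by 1 / (1 - q), so the dropout mask has expectation 1.
    Labels y are in {-1, +1}; the loss is log(1 + exp(-y * <w, x~>))
    averaged over examples and sampled dropout masks.
    """
    rng = np.random.default_rng(rng)
    n, d = X.shape
    total = 0.0
    for _ in range(n_samples):
        mask = (rng.random((n, d)) > q) / (1.0 - q)  # E[mask] = 1
        margins = y * ((X * mask) @ w)
        total += np.mean(np.log1p(np.exp(-margins)))
    return total / n_samples
```

As a sanity check, at w = 0 every margin is 0 and the criterion equals log 2 regardless of the dropout probability, matching the plain logistic loss at the origin.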