Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

When Does Gradient Descent with Logistic Loss Find Interpolating Two-Layer Networks?

Authors: Niladri S. Chatterji, Philip M. Long, Peter L. Bartlett

JMLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We study the training of finite-width two-layer smoothed ReLU networks for binary classification using the logistic loss. We show that gradient descent drives the training loss to zero if the initial loss is small enough. When the data satisfies certain cluster and separation conditions and the network is wide enough, we show that one step of gradient descent reduces the loss sufficiently that the first result applies.
Researcher Affiliation | Collaboration | Niladri S. Chatterji (EMAIL), Computer Science Department, Stanford University, 353 Jane Stanford Way, Stanford, CA 94305; Philip M. Long (EMAIL), Google, 1600 Amphitheatre Parkway, Mountain View, CA 94043; Peter L. Bartlett (EMAIL), University of California, Berkeley & Google, 367 Evans Hall #3860, Berkeley, CA 94720-3860.
Pseudocode | No | The paper describes the gradient descent updates mathematically as 'θ(t+1) := θ(t) − α_t ∇_θ L|_{θ=θ(t)}'. It does not provide a distinct pseudocode or algorithm block.
Open Source Code | No | The paper does not contain an unambiguous statement that the authors are releasing code for the work described, nor does it provide a direct link to a source-code repository.
Open Datasets | No | The paper mentions generating its own data for simulations: 'The training data was for a two-class classification problem. There were 128 random examples drawn from a distribution in which each of two equally likely classes was distributed as a mixture of Gaussians whose centers had an XOR structure...' and 'except with a different, more challenging, data distribution, which we call the shoulders distribution'. It does not provide access information (link, DOI, citation) to a publicly available dataset.
Dataset Splits | No | The paper mentions 'The training data was for a two-class classification problem. There were 128 random examples...' and later states 'We performed 100 rounds of batch gradient descent to minimize the softmax loss on random training data.' It does not specify any explicit training, validation, or test splits for reproducibility.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or cloud computing instances used for running its experiments or simulations.
Software Dependencies | No | The paper describes the algorithms and their performance but does not specify any software dependencies (e.g., libraries, frameworks, or solvers) with version numbers.
Experiment Setup | Yes | The number p of hidden units per class was 100. The activation functions were Huberized ReLUs with h = 1/p. The weights were initialized using N(0, (4p)^(−5/4)) and the initial step size was (4p)^(−3/4). (These correspond to the choice β = 1/4 in Theorem 2.) For the other updates, the step size on iteration t was log2(1/L_t)/p.
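The experiment described across the rows above can be pieced together into a short simulation. The following is a minimal NumPy sketch, not the authors' code: the XOR center locations and Gaussian noise scale, the fixed ±1 second-layer weights, reading N(0, (4p)^(−5/4)) as a variance, and the exact Huberized-ReLU formula (quadratic on (0, h], linear above) are all assumptions filled in for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 100           # hidden units per class (from the setup row)
m = 4 * p         # total hidden units -- architectural assumption
h = 1.0 / p       # Huberization parameter

def huber_relu(z):
    # Smoothed ReLU: 0 for z <= 0, quadratic on (0, h], then z - h/2.
    return np.where(z <= 0, 0.0, np.where(z <= h, z**2 / (2 * h), z - h / 2))

def huber_relu_grad(z):
    # Derivative of the Huberized ReLU above (continuous at z = h).
    return np.where(z <= 0, 0.0, np.where(z <= h, z / h, 1.0))

# 128 examples from two equally likely classes, each a mixture of Gaussians
# whose centers have an XOR structure; centers/noise scale are assumptions.
n = 128
centers = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0], [-1.0, 1.0]])
center_labels = np.array([1.0, 1.0, -1.0, -1.0])   # XOR labeling
idx = rng.integers(0, 4, size=n)
X = centers[idx] + 0.1 * rng.standard_normal((n, 2))
y = center_labels[idx]

# First-layer weights ~ N(0, (4p)^(-5/4)), read here as the variance;
# output weights fixed at +/-1, half each (assumed convention).
W = rng.normal(0.0, (4 * p) ** (-5 / 8), size=(m, 2))
a = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])

def logistic_loss(W):
    out = huber_relu(X @ W.T) @ a
    return np.mean(np.log1p(np.exp(-y * out)))

losses = [logistic_loss(W)]
for t in range(100):              # 100 rounds of batch gradient descent
    z = X @ W.T                   # (n, m) pre-activations
    out = huber_relu(z) @ a
    g = -y / (1.0 + np.exp(y * out)) / n               # dL/d(out_i)
    grad_W = (g[:, None] * huber_relu_grad(z) * a[None, :]).T @ X
    # Initial step size (4p)^(-3/4); afterwards log2(1/L_t)/p, which is
    # positive only once the loss is below 1 (the regime of the paper).
    alpha = (4 * p) ** (-3 / 4) if t == 0 else np.log2(1.0 / losses[-1]) / p
    W = W - alpha * grad_W
    losses.append(logistic_loss(W))
```

Under these assumed constants the training loss shrinks over the 100 rounds; whether the paper's interpolation guarantee actually applies depends on the cluster/separation conditions of the true data distribution, which the table only paraphrases.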