Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

A Comparison of Continuous-Time Approximations to Stochastic Gradient Descent

Authors: Stefan Ankirchner, Stefan Perko

JMLR 2024

Reproducibility Variable | Result | LLM Response
Research Type: Experimental
"Firstly, we provide a theoretical comparison using Theorems 1, 2 and 3 (see Theorem 6). We will see that the comparison highly depends on the batch size and on the kurtosis of the features (also called independent variables). Secondly, we substantiate the theoretical findings using a numerical example. In this subsection we present results from a numerical experiment confirming the theoretical results presented in Theorem 6."
Researcher Affiliation: Academia
"Stefan Ankirchner (EMAIL), Institute for Mathematics, Friedrich-Schiller-University Jena, 07737 Jena, Germany; Stefan Perko (EMAIL), Institute for Mathematics, Friedrich-Schiller-University Jena, 07737 Jena, Germany"
Pseudocode: No
"The paper describes mathematical equations and theoretical derivations, but does not present any structured pseudocode or algorithm blocks."
Open Source Code: No
"The paper does not contain any explicit statement about providing source code or a link to a code repository for the described methodology. The license information provided is for the paper itself, not for its implementation code."
Open Datasets: No
"Suppose we are given an R^d-valued random variable x and an R-valued random variable ε defined on a probability space (Ω, F, P), such that x and ε are independent, Eε = 0, σ_ε² := Eε² < ∞, the covariance matrix κ of x is positive definite, and x has finite joint fourth moments... Let θ∗ ∈ R^d. We define the R-valued random variable y by y = ⟨θ∗, x⟩ + ε. Denote the distribution of (x, y) by ν. We call ν the population. We consider data drawn from ν, which follows a linear model. The population is considered unknown to us. Example 2: We study in detail the following two specific settings. (a) We assume that the features are centered Gaussian, that is x ∼ N(0, κ). (b) We assume that d = 1, but not that x is Gaussian."
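The quoted data-generating model can be sketched in a few lines. This is a hypothetical illustration only (the paper releases no code); the function name and the choice κ = I are assumptions, and it implements setting (a) of Example 2 with centered Gaussian features.

```python
import numpy as np

def sample_population(n, theta_star, rng):
    """Draw n samples (x, y) from the linear model y = <theta*, x> + eps,
    with centered Gaussian features and unit covariance (kappa = I)."""
    d = theta_star.shape[0]
    x = rng.standard_normal((n, d))   # x ~ N(0, I), covariance kappa = I
    eps = rng.standard_normal(n)      # eps ~ N(0, 1), independent of x
    y = x @ theta_star + eps          # y = <theta*, x> + eps
    return x, y

rng = np.random.default_rng(0)
x, y = sample_population(1000, np.array([1.0, -2.0]), rng)
```

Since the population ν is considered unknown, an experiment would only interact with such samples, never with θ∗ or κ directly.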
Dataset Splits: No
"The paper describes generating data from a model for numerical experiments rather than using a fixed dataset with predefined training, validation, or test splits. For the Monte Carlo simulation, it states: 'More precisely, to compute one copy χ̂_i we draw BT/h i.i.d. samples from the data-generating model (24) and then perform SGD for T/h steps using a batch of B samples in each step, never using any sample twice.'"
Hardware Specification: No
"The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used for running its numerical experiments."
Software Dependencies: No
"The paper does not provide specific software dependencies, such as programming languages, libraries, or solvers with version numbers, used for the numerical experiments."
Experiment Setup: Yes
"Here we use time horizons T = 0.5 and T = 2.0, varying distributions of x and initial values θ. We use a Monte Carlo approximation to estimate E R(χ^h_{T/h})... For the experiments we have chosen M large enough (between 10^8 and 2·10^9)... We consider the learning rates h = 0.5, 0.1, 0.05, 0.01, 0.005, 0.001. Notice that T/h is an integer in each case, where T ∈ {0.5, 2.0}. Plotted is the dependence of the weak error... divided by κ (!), on the learning rate h. The model used is y = x + ε with x, ε independent, centered and of variance 1, where ε is Gaussian. Note that in this case we have θ∗ = 1. A table in Section 3.3.2 lists specific settings for T, θ∗, ν_x, κ, Kurt x, B, B_Eq, B_GF for various numerical experiments."
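The quoted Monte Carlo procedure can be sketched as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: it assumes the squared-error risk R(θ) = E(y − θx)², takes the features Gaussian, and uses a small M for illustration (the paper uses M between 10^8 and 2·10^9); all function and variable names are hypothetical.

```python
import numpy as np

def estimate_risk(h, T, B, M, rng, theta0=0.0):
    """Monte Carlo estimate of E R(chi^h_{T/h}) for SGD on the 1-d model
    y = x + eps (so theta* = 1), drawing B fresh samples at every step so
    that no sample is ever reused, as in the quoted setup."""
    n_steps = int(round(T / h))       # T/h is an integer for the chosen h
    risks = np.empty(M)
    for m in range(M):
        theta = theta0
        for _ in range(n_steps):
            x = rng.standard_normal(B)    # features: centered, unit variance
            eps = rng.standard_normal(B)  # Gaussian noise, independent of x
            y = x + eps                   # data-generating model, theta* = 1
            grad = np.mean((theta * x - y) * x)  # batch gradient of 0.5*(theta*x - y)^2
            theta -= h * grad
        risks[m] = (theta - 1.0) ** 2 + 1.0  # R(theta) = E(y - theta*x)^2
    return risks.mean()

est = estimate_risk(h=0.1, T=0.5, B=4, M=500, rng=np.random.default_rng(1))
```

Repeating this over the listed learning rates h and subtracting the corresponding quantity for a continuous-time approximation would yield the weak-error curves the paper plots; that comparison step is omitted here.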