One-step differentiation of iterative algorithms

Authors: Jérôme Bolte, Edouard Pauwels, Samuel Vaiter

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Several numerical examples illustrate the well-foundedness of the one-step estimator." "4 Numerical experiments. We illustrate our findings on three different problems."
Researcher Affiliation | Academia | "Jérôme Bolte, Toulouse School of Economics, Université Toulouse Capitole, Toulouse, France. Edouard Pauwels, Toulouse School of Economics (IUF), Toulouse, France. Samuel Vaiter, CNRS & Université Côte d'Azur, Laboratoire J. A. Dieudonné, Nice, France."
Pseudocode | Yes | "Algorithm 1: Automatic", "Algorithm 2: Implicit", and "Algorithm 3: One-step" (in Table 1), and "Figure 1: Implementation of Algorithms 1, 2 and 3 in jax" (an illustrative jax sketch of the three estimators is given after the table).
Open Source Code | No | The paper provides code snippets in Figure 1 to illustrate the implementation in jax, and mentions jax as a framework, but it does not state that the code for the methodology described in the paper is open source or publicly available, nor does it provide a link to such a repository.
Open Datasets | Yes | "Weighted ridge using gradient descent. We consider a weighted ridge problem... on the data set cpusmall provided by LibSVM [17]" (a hedged loading sketch is given after the table).
Dataset Splits | No | The paper does not provide explicit details about training, validation, and test dataset splits for its experiments; for instance, no split information is given for "Logistic regression using Newton's algorithm" or "Interior point solver for quadratic programming".
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU/CPU models, memory, or cloud instance types, used to run its experiments.
Software Dependencies | No | The paper mentions software such as jax, pytorch, LibSVM [17], and cvxopt [43], but it does not specify version numbers for these components, which are required for reproducibility (a sketch for recording the installed versions is given after the table).
Experiment Setup | Yes | "Logistic regression using Newton's algorithm... λ > 0 is a regularization parameter... Newton's method which we implement in jax using backtracking line search (Wolfe condition)." and "Weighted ridge using gradient descent... F(x, θ) = x − α∇f_θ(x) with x_0(θ) = 0, and we consider the K-step truncated Jacobian propagation F = F_θ^K with K = 1/κ where κ is the effective condition number of the Hessian... for two types of step sizes. Left column: small learning rate 1/L. Right column: big learning rate 2/(µ + L)." (an illustrative sketch of the gradient-descent setup is given after the table).
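
To complement the Pseudocode row: a minimal jax sketch of the three estimators named in Table 1 (automatic, implicit, one-step). This is not the paper's Figure 1 code; the toy quadratic objective, step size, and iteration count are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

A = jnp.array([[3.0, 1.0], [1.0, 2.0]])   # toy quadratic data (assumed)
b = jnp.array([1.0, -1.0])

def F(x, theta):
    """One gradient-descent step on f_theta(x) = 0.5 x^T A x - b^T x + 0.5 theta ||x||^2."""
    grad = A @ x - b + theta * x
    alpha = 0.1                            # illustrative step size
    return x - alpha * grad

def iterate(theta, K=200):
    x = jnp.zeros(2)
    for _ in range(K):
        x = F(x, theta)
    return x

# Algorithm 1 (automatic): backpropagate through all K iterations.
jac_autodiff = jax.jacobian(iterate)(0.5)

# Algorithm 3 (one-step): differentiate only the last iteration,
# treating the penultimate iterate as a constant via stop_gradient.
def one_step(theta, K=200):
    x = jax.lax.stop_gradient(iterate(theta, K - 1))
    return F(x, theta)

jac_one_step = jax.jacobian(one_step)(0.5)

# Algorithm 2 (implicit): solve (I - d_x F) J = d_theta F at the fixed point.
x_star = iterate(0.5)
dFdx = jax.jacobian(F, argnums=0)(x_star, 0.5)
dFdtheta = jax.jacobian(F, argnums=1)(x_star, 0.5)
jac_implicit = jnp.linalg.solve(jnp.eye(2) - dFdx, dFdtheta)

print(jac_autodiff, jac_one_step, jac_implicit)
```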
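To complement the Open Datasets row: a minimal loading sketch for cpusmall, assuming the file has been downloaded in svmlight format from the LIBSVM regression datasets page; the local path and the standardization step are assumptions, since the paper does not describe its preprocessing.

```python
from sklearn.datasets import load_svmlight_file
import numpy as np

# "cpusmall" is a placeholder path to the downloaded LIBSVM file
# (cpusmall: 8192 samples, 12 features).
X_sparse, y = load_svmlight_file("cpusmall")
X = np.asarray(X_sparse.todense())

# Simple standardization before (weighted) ridge regression; this is an
# assumption, as the paper does not state its exact preprocessing.
X = (X - X.mean(axis=0)) / X.std(axis=0)
print(X.shape, y.shape)
```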
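To complement the Software Dependencies row: since no version numbers are reported, one way a re-run could record them is sketched below; the package list simply follows the tools cited in the paper and is not exhaustive.

```python
from importlib.metadata import version, PackageNotFoundError

# Record the versions actually installed in the environment used for a re-run.
for pkg in ["jax", "jaxlib", "torch", "cvxopt", "scikit-learn", "numpy"]:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```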
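To complement the Experiment Setup row: a hedged sketch of the weighted ridge gradient-descent setup with the two quoted step sizes 1/L and 2/(µ + L). The per-coordinate exp(θ_i) weighting, the synthetic data, and the iteration count are illustrative assumptions and do not reproduce the paper's exact formulation on cpusmall.

```python
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
A = jax.random.normal(key, (50, 5))                    # stand-in design matrix
b = jax.random.normal(jax.random.PRNGKey(1), (50,))

def f(x, theta):
    # Illustrative weighted ridge: least squares plus exp(theta_i) x_i^2 penalties.
    return 0.5 * jnp.mean((A @ x - b) ** 2) + 0.5 * jnp.sum(jnp.exp(theta) * x ** 2)

def make_step(theta, alpha):
    # Fixed-point map F(x, theta) = x - alpha * grad f_theta(x).
    def F(x):
        return x - alpha * jax.grad(f)(x, theta)
    return F

theta = jnp.zeros(5)
H = A.T @ A / A.shape[0] + jnp.diag(jnp.exp(theta))    # Hessian of f (constant in x)
eigs = jnp.linalg.eigvalsh(H)
mu, L = eigs[0], eigs[-1]

for alpha in (1.0 / L, 2.0 / (mu + L)):                # small vs. big learning rate
    F = make_step(theta, alpha)
    x = jnp.zeros(5)
    for _ in range(500):
        x = F(x)
    print(alpha, jnp.linalg.norm(jax.grad(f)(x, theta)))
```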