Automatic differentiation of nonsmooth iterative algorithms

Authors: Jérôme Bolte, Edouard Pauwels, Samuel Vaiter

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 4.2 Numerical illustrations: We now detail how Figure 2 discussed in the introduction is obtained, and how it illustrates our theoretical results. We consider four scenarios (Ridge, Lasso, Sparse inverse covariance selection and Trend filtering) corresponding to the four columns. For each of them, the first line shows the empirical linear rate of the iterates xk and the second line shows the empirical linear rate of the derivative of xk. All experiments are repeated 100 times and we report the median along with the first and last deciles. (A sketch of this median/decile aggregation is given after the table.)
Researcher Affiliation | Academia | Jérôme Bolte: Toulouse School of Economics, University of Toulouse Capitole, Toulouse, France. Edouard Pauwels: IRIT, CNRS, Université de Toulouse; Institut Universitaire de France (IUF), Toulouse, France. Samuel Vaiter: CNRS & Université Côte d'Azur, Laboratoire J. A. Dieudonné, Nice, France.
Pseudocode | Yes | Algorithm 1: Algorithmic differentiation of recursion (1), forward and reverse modes. (A hedged JAX sketch of differentiating such a recursion is given after the table.)
Open Source Code | Yes | (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] The (self-contained) Jupyter notebook to generate the figures is submitted in the supplemental material.
Open Datasets | No | The paper describes problem instances (e.g., Ridge, Lasso) and mentions using 'random data' for the repetitions in Figure 2, but it does not provide concrete access information (link, DOI, or formal citation) for any specific publicly available or open dataset used in the experiments.
Dataset Splits | No | The paper does not provide training/validation/test dataset splits. It mentions that 'All experiments are repeated 100 times' but does not detail any data partitioning for training, validation, or testing.
Hardware Specification | No | The paper only states that the computations were run on standard laptops; no further hardware details are given.
Software Dependencies | Yes | The code is written in Python 3.9.7 using jax 0.3.13, jaxlib 0.3.10 and numpy 1.22.4. Figures are generated using matplotlib 3.5.2 and numpy 1.22.4. (These versions are collected in a sketch requirements file after the table.)
Experiment Setup | Yes | The condition numbers for the Ridge and Lasso experiments are chosen from 10 to 100 on a logarithmic scale. The regularization parameters for Lasso and Sparse Inverse Covariance Selection are chosen so that there are no zero components in the fixed point. In all experiments, the noise level is 1% of the norm of the signal. (An instance-generation sketch under these constraints is given after the table.)
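
The Pseudocode row refers to Algorithm 1, which differentiates a recursion x_{k+1} = F(x_k, theta) in forward and reverse modes. Below is a minimal sketch, assuming a simple Ridge-style gradient-step map F, of how such a recursion can be differentiated with the JAX version listed under Software Dependencies; the matrix A, the step size, and the iteration count are illustrative choices, not taken from the authors' notebook.

```python
# Minimal sketch (not the authors' notebook): differentiate the recursion
# x_{k+1} = F(x_k, theta) in forward and reverse mode with JAX.
# The map F below (a gradient step on a ridge objective), the data A, b,
# the step size and the iteration count are illustrative assumptions.
import jax
import jax.numpy as jnp

A = jnp.array([[2.0, 0.3], [0.3, 1.0]])   # assumed problem data
b = jnp.array([1.0, -1.0])
step = 0.1                                 # assumed step size (contractive here)
K = 200                                    # number of iterations

def F(x, theta):
    # One gradient step on 0.5*||A x - b||^2 + 0.5*theta*||x||^2.
    return x - step * (A.T @ (A @ x - b) + theta * x)

def x_K(theta):
    # Unrolled recursion: x_K as a function of the parameter theta.
    x = jnp.zeros(2)
    for _ in range(K):
        x = F(x, theta)
    return x

theta0 = 1.0
jac_fwd = jax.jacfwd(x_K)(theta0)  # forward mode: tangents pushed along the iterates
jac_rev = jax.jacrev(x_K)(theta0)  # reverse mode: backpropagation through the unrolled loop
print(jac_fwd, jac_rev)            # both approximate d x_K / d theta
```

For a contractive smooth F both modes converge to the derivative of the fixed point as K grows; the paper studies when and how this carries over to nonsmooth iterations.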
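
The Research Type row quotes Section 4.2, where Figure 2 reports, for each scenario, the median and the first/last deciles of the convergence error over 100 repetitions on random data. The following sketch shows one way such curves can be aggregated for the iterates; the quadratic test problem and the gradient step are assumptions used only to generate example errors.

```python
# Sketch of the aggregation described in Section 4.2: for each of 100 random
# repetitions, record ||x_k - x_star|| along the iterations, then report the
# median and the first/last deciles per iteration. The quadratic problem and
# the gradient step below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_rep, K, p = 100, 60, 10

errors = np.zeros((n_rep, K))
for r in range(n_rep):
    M = rng.standard_normal((p, p)) / np.sqrt(p)
    Q = M.T @ M + np.eye(p)                  # well-conditioned quadratic
    b = rng.standard_normal(p)
    x_star = np.linalg.solve(Q, b)           # exact minimizer
    step = 1.0 / np.linalg.norm(Q, 2)        # step below 1/L
    x = np.zeros(p)
    for k in range(K):
        x = x - step * (Q @ x - b)           # gradient step
        errors[r, k] = np.linalg.norm(x - x_star)

median = np.median(errors, axis=0)
dec_lo = np.quantile(errors, 0.1, axis=0)    # first decile
dec_hi = np.quantile(errors, 0.9, axis=0)    # last decile
# Plotted on a log scale, these curves are straight lines whose slope is the
# empirical linear rate, as in the rows of Figure 2.
```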
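
The Experiment Setup row states that condition numbers range from 10 to 100 on a logarithmic scale and that the noise level is 1% of the norm of the signal. Below is a sketch of one way a Ridge/Lasso-style instance satisfying these two constraints could be generated; the SVD-based construction and the dimensions are assumptions, not the authors' procedure.

```python
# Sketch: build a design matrix with a prescribed condition number and add
# noise at 1% of the signal norm, as stated in the Experiment Setup row.
# The SVD-based construction and all dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 20
cond_numbers = np.logspace(1, 2, num=5)      # condition numbers from 10 to 100

def make_instance(cond, rng):
    # Design matrix with singular values spread between 1 and cond.
    U, _ = np.linalg.qr(rng.standard_normal((n, p)))
    V, _ = np.linalg.qr(rng.standard_normal((p, p)))
    s = np.logspace(0, np.log10(cond), num=p)
    A = U @ np.diag(s) @ V.T
    x_true = rng.standard_normal(p)
    signal = A @ x_true
    noise = rng.standard_normal(n)
    noise *= 0.01 * np.linalg.norm(signal) / np.linalg.norm(noise)  # 1% noise level
    y = signal + noise
    return A, y, x_true

A, y, x_true = make_instance(cond_numbers[0], rng)
```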
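
For convenience, the versions quoted under Software Dependencies can be pinned in a requirements file; only the version numbers below are taken from the paper, the file itself is not part of its supplemental material.

```
# requirements.txt (versions from the Software Dependencies row; Python 3.9.7)
jax==0.3.13
jaxlib==0.3.10
numpy==1.22.4
matplotlib==3.5.2
```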