When does preconditioning help or hurt generalization?

Authors: Shun-ichi Amari, Jimmy Ba, Roger Baker Grosse, Xuechen Li, Atsushi Nitanda, Taiji Suzuki, Denny Wu, Ji Xu

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Lastly, we empirically compare the generalization error of first- and second-order optimizers in neural network experiments, and observe robust trends matching our theoretical analysis.
Researcher Affiliation | Collaboration | (1) RIKEN CBS, (2) University of Toronto, (3) Vector Institute, (4) Google Research, Brain Team, (5) University of Tokyo, (6) RIKEN AIP, (7) Columbia University; amari@brain.riken.jp, {jba,rgrosse,lxuechen,dennywu}@cs.toronto.edu, {nitanda,taiji}@mist.i.u-tokyo.ac.jp, jixu@cs.columbia.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | We consider the MNIST and CIFAR-10 datasets.
Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits needed to reproduce the experiment.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | Yes | For NGD, we use a fixed learning rate of 0.03. Since inverting a parameter-by-parameter-sized Fisher estimate per iteration would be costly, we adopt the Hessian-free approach (Martens, 2010), which computes approximate matrix-inverse-vector products using the conjugate gradient (CG) method (Hestenes et al., 1952). For each approximate inversion, we run CG for 200 iterations starting from the solution returned by the previous CG run. For the first run of CG, we initialize the vector from a standard Gaussian and run CG for 5k iterations. To ensure invertibility, we apply a very small amount of damping (0.00001) in most scenarios.
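The experiment-setup row describes a damped, warm-started conjugate-gradient solve for the natural-gradient direction. The following is a minimal sketch of that procedure, not the authors' code: the `conjugate_gradient` helper, the NumPy stand-in for the Fisher matrix, and all dimensions and step counts are illustrative assumptions chosen to mirror the hyperparameters quoted above.

```python
# Minimal sketch (assumed, not the authors' implementation) of the Hessian-free
# NGD step described above: approximate (F + damping*I)^{-1} g with conjugate
# gradients, warm-started from the previous CG solution.
import numpy as np

def conjugate_gradient(mvp, g, x0, iters=200, damping=1e-5, tol=1e-10):
    """Approximately solve (F + damping*I) x = g, where mvp(v) returns F @ v."""
    x = x0.copy()
    r = g - (mvp(x) + damping * x)          # initial residual
    p = r.copy()
    rs_old = r @ r
    for _ in range(iters):
        Ap = mvp(p) + damping * p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:           # converged early
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

# Illustrative usage with an assumed PSD stand-in for the Fisher matrix.
dim = 500
rng = np.random.default_rng(0)
A = rng.standard_normal((dim, dim)) / np.sqrt(dim)
F = A @ A.T                                 # stand-in "Fisher" (PSD by construction)
grad = rng.standard_normal(dim)
mvp = lambda v: F @ v

lr = 0.03                                   # fixed learning rate from the setup above
x = rng.standard_normal(dim)                # first run: Gaussian init, long CG run (5k iters)
x = conjugate_gradient(mvp, grad, x, iters=5000)
for step in range(3):                       # later runs: 200 iterations, warm-started
    x = conjugate_gradient(mvp, grad, x, iters=200)
    # the parameter update would then be: params -= lr * x
```

The warm start matters because successive natural-gradient solves share similar right-hand sides, so restarting CG from the previous solution lets 200 iterations per step suffice after the initial long run.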