Accelerating Natural Gradient with Higher-Order Invariance

Authors: Yang Song, Jiaming Song, Stefano Ermon

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "7. Experimental Evaluations: In this section, we demonstrate the benefit of respecting higher-order invariance through experiments on synthetic optimization problems, deep neural net optimization tasks and policy optimization in deep reinforcement learning."
Researcher Affiliation | Academia | "Computer Science Department, Stanford University. Correspondence to: Yang Song <yangsong@cs.stanford.edu>, Jiaming Song <tsong@cs.stanford.edu>, Stefano Ermon <ermon@cs.stanford.edu>."
Pseudocode | Yes | "For geodesic correction, we only need to compute connection-vector products Γ^μ_{αβ} γ̇^α γ̇^β. This can be done with an idea similar to Hessian-vector products (Pearlmutter, 1994), for which we provide detailed derivations and pseudocode in Appendix C." (A hedged autodiff sketch of the Hessian-vector-product idea appears after this table.)
Open Source Code | No | The paper references third-party code (OpenAI Baselines) but does not state that the authors' own source code for the described methodology is openly available.
Open Datasets | Yes | "The datasets are CURVES, MNIST and FACES, all of which contain small gray-scale images of various objects, i.e., synthetic curves, hand-written digits and human faces."
Dataset Splits | No | The paper does not explicitly provide dataset split percentages, sample counts, or a detailed splitting methodology for the training, validation, and test sets in the main text.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper mentions software such as TensorFlow and OpenAI Gym but does not provide version numbers for these or other key software components required for replication.
Experiment Setup | Yes | "During training, α and β are initialized to 1 and the learning rate is fixed to 0.5." (A minimal illustrative sketch of this setup follows the table.)
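
The Pseudocode row above refers to connection-vector products computed with the same trick as Hessian-vector products (Pearlmutter, 1994). The following is a minimal sketch of that general trick using JAX automatic differentiation; it is not the authors' Appendix C pseudocode, and the objective `loss`, the parameter vector `theta`, and the direction `v` are hypothetical placeholders.

```python
# Sketch of a Hessian-vector product in the spirit of Pearlmutter (1994):
# differentiate the gradient along a direction v, so the full Hessian is
# never materialized. Connection-vector products can reuse the same idea.
import jax
import jax.numpy as jnp

def loss(theta):
    # Hypothetical scalar objective standing in for a training loss.
    return jnp.sum(jnp.sin(theta) ** 2) + 0.5 * jnp.sum(theta ** 2)

def hvp(f, theta, v):
    # H(theta) @ v = forward-mode derivative of grad f at theta along v.
    return jax.jvp(jax.grad(f), (theta,), (v,))[1]

theta = jnp.array([0.3, -1.2, 0.7])
v = jnp.array([1.0, 0.0, -1.0])
print(hvp(loss, theta, v))  # same shape as theta; no n-by-n Hessian is built
```

Computing the product this way costs only a small constant multiple of one gradient evaluation, which is presumably why the same idea extends to the connection-vector products mentioned above.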
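
For the Experiment Setup row, a minimal sketch of the quoted configuration is given below. Only the initialization (α = β = 1) and the fixed learning rate of 0.5 come from the excerpt; the objective `toy_loss` and the plain gradient-descent update are hypothetical stand-ins, not the paper's synthetic task or its natural-gradient and geodesic-corrected updates.

```python
# Sketch of the quoted setup: alpha and beta start at 1, learning rate 0.5.
# The objective and the update rule are illustrative placeholders only.
import jax
import jax.numpy as jnp

def toy_loss(params):
    alpha, beta = params
    # Hypothetical smooth objective used purely for illustration.
    return (alpha - 2.0) ** 2 + (beta - 0.5) ** 2

params = jnp.array([1.0, 1.0])   # alpha and beta initialized to 1
learning_rate = 0.5              # fixed learning rate from the excerpt

grad_fn = jax.grad(toy_loss)
for step in range(100):
    params = params - learning_rate * grad_fn(params)
print(params)
```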