Efficiently avoiding saddle points with zero order methods: No gradients required
Authors: Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, Georgios Piliouras
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section we use simulations to verify our theoretical findings. Specifically, we are interested in verifying whether zero-order methods can avoid saddle points as efficiently as first-order methods. To do this we use the two-dimensional Rastrigin function, a popular benchmark in the non-convex optimization literature. (An illustrative definition of this benchmark appears after the table.) |
| Researcher Affiliation | Academia | Lampros Flokas, Department of Computer Science, Columbia University, New York, NY 10025, lamflokas@cs.columbia.edu; Emmanouil V. Vlatakis-Gkaragkounis, Department of Computer Science, Columbia University, New York, NY 10025, emvlatakis@cs.columbia.edu; Georgios Piliouras, Engineering Systems and Design, Singapore University of Technology and Design, Singapore, georgios@sutd.edu.sg |
| Pseudocode | Yes | Algorithm 1: PAGD(x₀) and Algorithm 2: Escape Saddle(x̂). (A generic perturbed-descent sketch inspired by these routines appears after the table.) |
| Open Source Code | No | The paper does not provide an explicit statement about the release of open-source code for the described methodology or a link to a code repository. |
| Open Datasets | No | The experiments use the two-dimensional Rastrigin function and the octopus function, which are mathematical functions rather than datasets that require specific access information. |
| Dataset Splits | No | The paper evaluates performance on mathematical functions (Rastrigin and Octopus functions) and does not describe any specific training, validation, or test dataset splits. |
| Hardware Specification | No | The paper describes simulations but does not provide any specific details about the hardware (CPU, GPU, memory, etc.) used to run these experiments. |
| Software Dependencies | No | The paper does not provide specific details on software dependencies, such as programming languages, libraries, or solvers with their version numbers, used for the experiments. |
| Experiment Setup | Yes | For both gradient descent and approximate gradient descent we used η = 1/(4ℓ). For approximate gradient descent we used symmetric differences to approximate the gradients, with β = 0.95 and h₀ = 0.15. Zero-order methods use symmetric differencing with h = 0.01. (See the finite-difference sketch after the table.) |
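
For reference, the two-dimensional Rastrigin benchmark quoted in the Research Type row can be written down directly. The sketch below is an illustrative NumPy definition; the constant A = 10 is the standard choice and is an assumption here, since the table does not quote the paper's exact parameterization.

```python
import numpy as np

# Standard Rastrigin function: f(x) = A*d + sum_i (x_i^2 - A*cos(2*pi*x_i)).
# It has a global minimum at the origin and a dense grid of local minima
# and saddle points, which is why it is a common non-convex benchmark.
A = 10.0  # assumed value; not reported in the table above

def rastrigin(x: np.ndarray) -> float:
    x = np.asarray(x, dtype=float)
    return A * x.size + np.sum(x**2 - A * np.cos(2 * np.pi * x))

# Two-dimensional instance used as the benchmark in the experiments.
print(rastrigin(np.zeros(2)))           # 0.0 at the global minimum
print(rastrigin(np.array([0.5, 0.5])))  # a non-optimal point, value 40.5
```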
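
The Experiment Setup row fixes the step size at η = 1/(4ℓ) and the finite-difference step at h = 0.01, with β = 0.95 and h₀ = 0.15 for the approximate-gradient variant. The sketch below shows a symmetric (central) difference gradient estimator and one descent step under those settings. The smoothness constant ℓ and the decaying schedule h_t = h₀·β^t are assumptions made for illustration, not values confirmed beyond what the table quotes.

```python
import numpy as np

def symmetric_diff_grad(f, x, h=0.01):
    """Zero-order gradient estimate via symmetric (central) differences:
    g_i = (f(x + h*e_i) - f(x - h*e_i)) / (2*h)."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def zero_order_step(f, x, ell, h=0.01):
    """One approximate gradient descent step with eta = 1/(4*ell), as quoted
    in the Experiment Setup row. The smoothness constant `ell` must be
    supplied by the caller; it is not given in the table."""
    eta = 1.0 / (4.0 * ell)
    return x - eta * symmetric_diff_grad(f, x, h)

# A decaying finite-difference schedule h_t = h0 * beta**t with h0 = 0.15
# and beta = 0.95 is one plausible reading of the quoted beta/h0 parameters
# (assumption, not confirmed by the table).
def h_schedule(t, h0=0.15, beta=0.95):
    return h0 * beta**t
```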
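
The Pseudocode row names Algorithm 1 (PAGD) and Algorithm 2 (Escape Saddle). Since the table does not reproduce the paper's exact thresholds and constants, the loop below is only a generic perturbed approximate gradient descent sketch: take zero-order descent steps, and when the estimated gradient becomes small, inject a small uniform perturbation so the iterate can leave a saddle region. The function names, gradient-norm threshold, and perturbation radius are illustrative assumptions, not the paper's values.

```python
import numpy as np

def approx_grad(f, x, h=0.01):
    # Symmetric-difference gradient estimate (see the previous sketch).
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def perturbed_approx_gd(f, x0, eta, steps=1000, h=0.01,
                        grad_tol=1e-3, radius=0.1, rng=None):
    """Generic PAGD-style loop (illustrative, not the paper's exact
    Algorithm 1/2): descend along the zero-order gradient estimate and,
    whenever that estimate is nearly zero, perturb the iterate inside a
    small ball to escape a potential saddle point."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = approx_grad(f, x, h)
        if np.linalg.norm(g) <= grad_tol:
            # "Escape saddle" step: uniform perturbation in a ball.
            u = rng.normal(size=x.size)
            u *= radius * rng.uniform() ** (1.0 / x.size) / np.linalg.norm(u)
            x = x + u
        else:
            x = x - eta * g
    return x
```

With the quoted settings, one would call `perturbed_approx_gd(rastrigin, x0, eta=1/(4*ell), h=0.01)` for some estimate of the smoothness constant `ell`.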