On the Second-order Convergence Properties of Random Search Methods
Authors: Aurelien Lucchi, Antonio Orvieto, Adamos Solomou
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test our algorithm empirically and find good agreement with our theoretical results. |
| Researcher Affiliation | Academia | Aurelien Lucchi, Antonio Orvieto, Adamos Solomou, Department of Computer Science, ETH Zurich |
| Pseudocode | Yes | Algorithm 1 TWO-STEP RANDOM SEARCH (RS). Similar to the STP method [6], but alternating between two perturbation magnitudes: σ1 is set to be optimal for the large-gradient case, while σ2 is set to be optimal for escaping saddle points (see the sketch after this table). |
| Open Source Code | Yes | The code for reproducing the experiments is available online. |
| Open Datasets | No | The paper's experiments use synthetic objectives (e.g., a 'Function with growing dimension' and the 'Rastrigin function') that are generated procedurally rather than drawn from publicly available datasets with access information. |
| Dataset Splits | No | The paper uses synthetic functions for optimization tasks and does not specify training, validation, or test dataset splits in the conventional sense. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | For each task, the hyperparameters of every method are selected based on a coarse grid search refined by trial and error. We choose to run DFPI for 20 iterations for all the results shown in the paper. |
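
The Pseudocode row above describes Algorithm 1, a two-step variant of STP-style random search that alternates between two perturbation magnitudes. Below is a minimal sketch of that idea in Python, assuming a best-of-three STP update and using the Rastrigin function mentioned in the Open Datasets row as the objective. The values of `sigma1`, `sigma2`, the iteration budget, and the function names are illustrative choices, not the paper's exact hyperparameters.

```python
import numpy as np

def rastrigin(x):
    """Rastrigin test function: f(x) = 10*d + sum_i (x_i^2 - 10*cos(2*pi*x_i))."""
    return 10.0 * x.size + np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x))

def two_step_random_search(f, x0, sigma1=0.1, sigma2=1.0, iters=2000, seed=0):
    """STP-style random search that alternates two perturbation magnitudes.

    sigma1 plays the role of the large-gradient step and sigma2 the
    saddle-escape step; both values here are illustrative, not the paper's.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for t in range(iters):
        sigma = sigma1 if t % 2 == 0 else sigma2   # alternate sigma1 / sigma2
        s = rng.standard_normal(x.size)
        s /= np.linalg.norm(s)                     # uniform direction on the sphere
        x_plus, x_minus = x + sigma * s, x - sigma * s
        # Keep the best of the current point and the two signed perturbations.
        fx, x = min((fx, x), (f(x_plus), x_plus), (f(x_minus), x_minus),
                    key=lambda c: c[0])
    return x, fx

x_best, f_best = two_step_random_search(rastrigin, x0=np.full(10, 3.0))
print(f"best Rastrigin value found: {f_best:.4f}")
```

In the paper's analysis the two magnitudes are derived from the problem's smoothness constants; here they are fixed constants purely for demonstration.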