Guided evolutionary strategies: augmenting random search with surrogate gradients

Authors: Niru Maheswaranathan, Luke Metz, George Tucker, Dami Choi, Jascha Sohl-Dickstein

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we apply our method to example problems, demonstrating an improvement over both standard evolutionary strategies and first-order methods that directly follow the surrogate gradient. Figure 1b demonstrates the performance of the method on a toy problem, and is discussed in 4.1."
Researcher Affiliation | Industry | "Google Research, Brain Team, Mountain View, CA, United States. Correspondence to: Niru Maheswaranathan <nirum@google.com>."
Pseudocode | Yes | "Algorithm 1 Guided Evolutionary Strategies" (a minimal sketch appears after the table)
Open Source Code | Yes | "For a demo of the method, please see: https://github.com/brain-research/guided-evolutionary-strategies"
Open Datasets | No | The paper's experiments use generated data (e.g., 'random quadratic problems', 'eigenvalues of the Hessian', 'synthetic gradients'), and it provides no concrete access information (link, DOI, or formal citation) for any publicly available dataset.
Dataset Splits | No | The paper does not give the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not specify the hardware (GPU/CPU models, processor speeds, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not list the ancillary software (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "For this, and all of the results in this paper, we set the hyperparameters as β = 2 and α = 1/2, as described above."
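
Since the table quotes both the paper's Algorithm 1 and its hyperparameter settings (β = 2, α = 1/2), a minimal NumPy sketch of one Guided ES step is included below for orientation. This is not the authors' released implementation (that is the GitHub demo linked in the table): the function name `guided_es_step`, the defaults for `sigma`, `lr`, and `num_pairs`, and the QR orthonormalization of the surrogate-gradient buffer are illustrative assumptions, while the blended covariance and the antithetic estimator follow the paper's description.

```python
import numpy as np

def guided_es_step(f, x, surrogate_grads, sigma=0.1, alpha=0.5, beta=2.0,
                   num_pairs=10, lr=0.2, rng=None):
    """One Guided ES update (sketch of Algorithm 1; names/defaults assumed).

    f               : scalar loss function of x
    x               : current parameters, shape (n,)
    surrogate_grads : (n, k) buffer whose columns span the guiding subspace
    """
    rng = np.random.default_rng() if rng is None else rng
    n, k = surrogate_grads.shape
    # Orthonormal basis U of the guiding subspace (reduced QR).
    U, _ = np.linalg.qr(surrogate_grads)
    grad_est = np.zeros(n)
    for _ in range(num_pairs):
        # Perturbation with covariance sigma^2 [(alpha/n) I + ((1-alpha)/k) U U^T],
        # drawn via its square root; alpha = 1/2 blends full-space and
        # subspace search equally, matching the quoted setting.
        eps = sigma * (np.sqrt(alpha / n) * rng.standard_normal(n)
                       + np.sqrt((1.0 - alpha) / k) * (U @ rng.standard_normal(k)))
        # Antithetic pairs (x + eps, x - eps) reduce estimator variance.
        grad_est += beta / (2.0 * sigma**2 * num_pairs) \
                    * eps * (f(x + eps) - f(x - eps))
    # Plain gradient-descent update; the paper pairs the estimator with
    # standard first-order optimizers.
    return x - lr * grad_est

# Toy usage loosely modeled on the paper's quadratic example: the
# "surrogate gradient" is the true gradient corrupted by a fixed bias.
rng = np.random.default_rng(0)
n = 100
x = rng.standard_normal(n)
bias = 0.5 * rng.standard_normal(n)
f = lambda v: 0.5 * np.dot(v, v)  # grad f(v) = v
for _ in range(200):
    surrogate = (x + bias).reshape(n, 1)  # biased surrogate gradient, k = 1
    x = guided_es_step(f, x, surrogate, rng=rng)
print("final loss:", f(x))
```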