Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Natural Evolution Strategies
Authors: Daan Wierstra, Tom Schaul, Tobias Glasmachers, Yi Sun, Jan Peters, Jürgen Schmidhuber
JMLR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically validate the new algorithms, to determine how NES algorithms perform compared to state-of-the-art evolution strategies, identifying specific strengths and limitations of the different variants. We conduct a broad series of experiments on standard benchmarks, as well as more specific experiments testing special capabilities. |
| Researcher Affiliation | Collaboration | Daan Wierstra EMAIL Tom Schaul EMAIL DeepMind Technologies Ltd., Fountain House, 130 Fenchurch Street, London, United Kingdom; Tobias Glasmachers EMAIL Institute for Neural Computation, Universitätsstrasse 150, Ruhr-University Bochum, Germany; Yi Sun EMAIL Google Inc., 1600 Amphitheatre Pkwy, Mountain View, United States; Jan Peters EMAIL Intelligent Autonomous Systems Institute, Hochschulstrasse 10, Technische Universität Darmstadt, Germany; Jürgen Schmidhuber EMAIL Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA), University of Lugano (USI)/SUPSI, Galleria 2, Manno-Lugano, Switzerland |
| Pseudocode | Yes | Algorithm 1: Canonical Search Gradient algorithm; Algorithm 2: Search Gradient algorithm: Multinormal distribution; Algorithm 3: Canonical Natural Evolution Strategies; Algorithm 4: Adaptation sampling; Algorithm 5: Exponential Natural Evolution Strategies (xNES) (multinormal case); Algorithm 6: Separable NES (SNES) |
| Open Source Code | Yes | A Python implementation of all these is available within the open-source machine learning library PyBrain (Schaul et al., 2010), and implementations in different languages can be found at http://www.idsia.ch/~tom/nes.html. |
| Open Datasets | Yes | We evaluate our algorithm on all the benchmark functions of the Black-Box Optimization Benchmarking collection (BBOB) from the GECCO Workshop for Real-Parameter Optimization. The collection consists of 24 noise-free functions (12 unimodal, 12 multimodal; Hansen et al., 2010a) and 30 noisy functions (Hansen et al., 2010b). |
| Dataset Splits | No | The paper uses benchmark functions like the BBOB collection and a neuroevolution task. It describes evaluation criteria such as a "budget of function evaluations (10^5d)" and reaching a "target value fopt + 10k" for these benchmarks, but it does not specify explicit training/test/validation dataset splits in the conventional sense for a static dataset. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU models, or memory specifications used for running experiments. |
| Software Dependencies | No | A Python implementation of all these is available within the open-source machine learning library PyBrain (Schaul et al., 2010). No specific version numbers for Python or PyBrain are provided. |
| Experiment Setup | Yes | Table 1: Default parameter values for xNES, xNES-as and SNES (including the utility function) as a function of problem dimension d. This table provides specific hyperparameters such as λ, ηµ, ησ, and ηB. |
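The Pseudocode row above refers to the paper's search-gradient and separable NES (SNES) algorithms: sample from a diagonal Gaussian, shape fitness into rank-based utilities, then follow the search gradient with respect to the mean and the per-dimension step sizes. The following is a minimal sketch of that update loop, not the paper's PyBrain implementation; the toy `sphere` objective, the learning rates, and all parameter names here are illustrative, not the defaults from Table 1.

```python
import numpy as np

def sphere(x):
    """Toy objective for minimization: sum of squares."""
    return np.sum(x ** 2)

def snes_sketch(f, mu, sigma, iters=300, pop=10,
                eta_mu=1.0, eta_sigma=0.1, seed=0):
    """SNES-style search-gradient loop on a diagonal Gaussian (illustrative)."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    for _ in range(iters):
        s = rng.standard_normal((pop, d))   # standard-normal samples
        z = mu + sigma * s                  # candidate solutions
        fit = np.array([f(zi) for zi in z])
        # Rank-based fitness shaping: rank 0 = best sample (lowest f),
        # mapped to utilities in [-0.5, 0.5] with the best weighted highest.
        ranks = np.argsort(np.argsort(fit))
        util = (pop - 1 - ranks) / (pop - 1) - 0.5
        # Search gradients w.r.t. mu and log(sigma) for a diagonal Gaussian.
        g_mu = util @ s / pop
        g_sigma = util @ (s ** 2 - 1) / pop
        mu = mu + eta_mu * sigma * g_mu
        sigma = sigma * np.exp(0.5 * eta_sigma * g_sigma)
    return mu, sigma

mu, sigma = snes_sketch(sphere, mu=np.full(5, 3.0), sigma=np.ones(5))
```

The multiplicative `exp`-based update of `sigma` mirrors the exponential parameterization used by xNES/SNES, which keeps the step sizes positive by construction.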