Differentiable Quality Diversity

Authors: Matthew Fontaine, Stefanos Nikolaidis

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Results in two QD benchmark domains and in searching the latent space of a StyleGAN show that MEGA significantly outperforms state-of-the-art QD algorithms, highlighting DQD's promise for efficient quality diversity optimization when gradient information is available."
Researcher Affiliation | Academia | Matthew C. Fontaine, University of Southern California, Los Angeles, CA (mfontain@usc.edu); Stefanos Nikolaidis, University of Southern California, Los Angeles, CA (nikolaid@usc.edu)
Pseudocode | Yes | "Algorithm 1 shows the pseudocode for CMA-MEGA." (A hedged executable sketch of the CMA-MEGA loop appears after the table.)
Open Source Code | Yes | "Source code is available at https://github.com/icaros-usc/dqd."
Open Datasets | Yes | "The CMA-ME study [19] introduces two variants of the linear projection domain with an objective based on the sphere and Rastrigin functions from the continuous black-box optimization set of benchmarks [29, 31]. We select the robotic arm repertoire domain from previous work [13, 61]. Previous work [20] introduced the problem of exploring the latent space of a generative model directly with a QD algorithm." (A sketch of the linear projection sphere domain follows the table.)
Dataset Splits | No | The paper uses benchmark domains for its experiments but does not describe training, validation, and test splits with percentages or sample counts; the trial and iteration counts it reports characterize experimental runs rather than data partitioning for model training and evaluation.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, memory, or cloud instances) used to run the experiments.
Software Dependencies | No | The paper mentions the pyribs QD library and its integration of Adam, StyleGAN, and CLIP, but it gives no version numbers for these or any other software components used in the experiments.
Experiment Setup | Yes | "Algorithm 1 provides the input parameters for CMA-MEGA, including 'a desired number of iterations N, a branching population size λ, a learning rate η, and an initial step size for CMA-ES σg', which are specific experimental setup details." (These inputs are mapped onto illustrative values in the usage example below.)
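
Since the paper presents Algorithm 1 only as pseudocode, a minimal executable sketch of the CMA-MEGA loop may help. It assumes the pycma package (`pip install cma`) for the coefficient distribution; the names `evaluate`, `cells`, and `bounds` are illustrative, and several details are deliberately simplified relative to Algorithm 1: the archive is a plain dict over a uniform grid, branches are ranked by a single improvement scalar rather than the paper's two-stage ranking, and θ follows the best branch instead of a weighted recombination with Adam.

```python
# Simplified CMA-MEGA sketch, not the authors' reference implementation.
import numpy as np
import cma  # pycma


def cma_mega(evaluate, theta0, iterations=1000, batch_size=36,
             eta=1.0, sigma_g=0.05, cells=100, bounds=(-100.0, 100.0)):
    """evaluate(theta) -> (f, grad_f, m, grad_m); grad_m has one row per measure."""
    theta = np.asarray(theta0, dtype=float)
    k = len(evaluate(theta)[2])              # number of measures
    archive = {}                             # cell index -> (f, theta)

    def cell(m):
        # Uniform grid over `bounds`, `cells` bins per measure dimension.
        lo, hi = bounds
        idx = np.floor((np.asarray(m) - lo) / (hi - lo) * cells)
        return tuple(np.clip(idx, 0, cells - 1).astype(int))

    def add(f, m, x):
        """Insert x into the archive; return (improved?, ranking score)."""
        key = cell(m)
        if key not in archive:
            archive[key] = (f, x.copy())
            return True, f                   # new cell: rank by raw objective
        f_old = archive[key][0]
        if f > f_old:
            archive[key] = (f, x.copy())
            return True, f - f_old
        return False, f - f_old

    def fresh_es():
        # CMA-ES searches over gradient coefficients: one for the
        # objective plus one per measure (k + 1 total).
        return cma.CMAEvolutionStrategy(np.zeros(k + 1), sigma_g,
                                        {'popsize': batch_size, 'verbose': -9})

    es = fresh_es()
    for _ in range(iterations):
        f, grad_f, m, grad_m = evaluate(theta)
        add(f, m, theta)
        # Normalize gradients so the coefficients set their magnitudes.
        grads = np.vstack([grad_f, grad_m])
        grads /= np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12
        coeffs = es.ask()                    # λ coefficient vectors c ~ N(μ, Σ)
        improved, deltas, steps = [], [], []
        for c in coeffs:
            c = np.asarray(c).copy()
            c[0] = abs(c[0])                 # objective coefficient kept >= 0
            step = c @ grads                 # branch in the gradient subspace
            fi, _, mi, _ = evaluate(theta + step)
            imp, d = add(fi, mi, theta + step)
            improved.append(imp)
            deltas.append(d)
            steps.append(step)
        es.tell(coeffs, [-d for d in deltas])  # CMA-ES minimizes, so negate
        if any(improved):
            # Walk theta along the best improving branch (the paper instead
            # takes a weighted recombination of branches, stepped with Adam).
            theta = theta + eta * steps[int(np.argmax(deltas))]
        else:
            # No branch improved the archive: restart from a random elite.
            elites = list(archive.values())
            theta = elites[np.random.randint(len(elites))][1].copy()
            es = fresh_es()
    return archive
```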
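The linear projection domain cited in the Open Datasets row admits analytic gradients, which is what makes it usable as a DQD benchmark. The sketch below implements the sphere variant under common conventions for this domain (optimum shifted to 2.048, components clipped at ±5.12, each measure summing one half of the clipped solution); treat the exact clipping rule, normalization, and constants as assumptions rather than the paper's precise definitions.

```python
import numpy as np


def lin_proj_sphere(theta, shift=2.048, bound=5.12):
    """Sphere objective (negated for maximization) with two
    linear-projection measures and analytic gradients."""
    theta = np.asarray(theta, dtype=float)
    n = len(theta)
    f = -np.sum((theta - shift) ** 2)        # negated shifted sphere
    grad_f = -2.0 * (theta - shift)
    # Assumed clip: clip(x) = x inside [-bound, bound], else bound / x,
    # whose derivative outside the band is -bound / x**2.
    inside = np.abs(theta) <= bound
    safe = np.where(inside, 1.0, theta)      # avoid divide-by-zero warnings
    clipped = np.where(inside, theta, bound / safe)
    dclip = np.where(inside, 1.0, -bound / safe ** 2)
    # Each measure sums the clipped components of one half of theta.
    half = n // 2
    m = np.array([clipped[:half].sum(), clipped[half:].sum()])
    grad_m = np.zeros((2, n))
    grad_m[0, :half] = dclip[:half]
    grad_m[1, half:] = dclip[half:]
    return f, grad_f, m, grad_m
```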
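Finally, a hypothetical run wiring the two sketches together, mapping Algorithm 1's inputs (iterations N, branching population λ, learning rate η, initial step size σg) onto the sketch's arguments. The values are illustrative, not the paper's reported settings.

```python
import numpy as np

n = 20                                       # solution dimension
half_range = 5.12 * n / 2                    # each measure spans roughly +/- this
archive = cma_mega(lin_proj_sphere, np.zeros(n),
                   iterations=1000,          # N
                   batch_size=36,            # λ, branching population size
                   eta=1.0,                  # η, gradient-ascent learning rate
                   sigma_g=0.05,             # σg, CMA-ES initial step size
                   cells=100,
                   bounds=(-half_range, half_range))
print(f"coverage: {len(archive)} / {100 * 100} cells, "
      f"best objective: {max(v[0] for v in archive.values()):.2f}")
```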