Evolution Gym: A Large-Scale Benchmark for Evolving Soft Robots

Authors: Jagdeep Bhatia, Holly Jackson, Yunsheng Tian, Jie Xu, Wojciech Matusik

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Evaluating the algorithms on our benchmark platform, we observe robots exhibiting increasingly complex behaviors as evolution progresses, with the best evolved designs solving many of our proposed tasks. Additionally, even though robot designs are evolved autonomously from scratch without prior knowledge, they often grow to resemble existing natural creatures while outperforming hand-designed robots. Nevertheless, all tested algorithms fail to find robots that succeed in our hardest environments."
Researcher Affiliation | Academia | Jagdeep Singh Bhatia, MIT CSAIL, jagdeep@mit.edu; Holly Jackson, MIT CSAIL, hjackson@mit.edu; Yunsheng Tian, MIT CSAIL, yunsheng@csail.mit.edu; Jie Xu, MIT CSAIL, jiex@csail.mit.edu; Wojciech Matusik, MIT CSAIL, wojciech@csail.mit.edu
Pseudocode | Yes | "Algorithm 1: Algorithmic framework of robot evolution" (a minimal sketch of this kind of loop follows the table)
Open Source Code | Yes | "Our website with code, environments, documentation, and tutorials is available at http://evogym.csail.mit.edu. ... Evolution Gym will be released fully open-source under the MIT license." (a minimal usage sketch follows the table)
Open Datasets | Yes | "Our website with code, environments, documentation, and tutorials is available at http://evogym.csail.mit.edu. ... Evolution Gym will be released fully open-source under the MIT license."
Dataset Splits | No | The paper describes a simulation benchmark for evolving robots and discusses generations and evaluations, but it does not specify traditional dataset splits (e.g., percentages or counts for training, validation, and test sets) because the benchmark is not built on a static, pre-collected dataset.
Hardware Specification | Yes | "The evaluations of our baseline algorithms are performed on machines with Intel Xeon CPU @ 2.80GHz * 80 processors on Google Cloud Platform; GPU is not required."
Software Dependencies | No | The paper mentions several software components, such as the GPyOpt package, the PyTorch-NEAT library, the neat-python library, and Proximal Policy Optimization (PPO), but it does not provide specific version numbers for these dependencies. (a hypothetical version-recording snippet follows the table)
Experiment Setup | No | The paper states, "See Appendix D for more details on hyperparameters of all the experiments." This indicates that detailed experimental setup information, such as hyperparameter values, is not given in the main text.
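
The Pseudocode row points to Algorithm 1, the paper's outer design-evolution loop wrapped around inner controller training. The snippet below is a minimal sketch of that style of loop under stated assumptions, not the authors' implementation: `sample_design`, `evaluate_with_ppo`, and `mutate` are hypothetical placeholders for the design sampler, the PPO-based controller optimization, and the design mutation operator used by the baselines.

```python
import random

def evolve(sample_design, evaluate_with_ppo, mutate,
           pop_size=25, survivors=10, generations=30):
    """Generic design-evolution loop in the spirit of Algorithm 1:
    sample designs, score each by training a controller, keep the
    fittest, and refill the population with mutated survivors."""
    population = [sample_design() for _ in range(pop_size)]
    for _ in range(generations):
        # Inner loop: a design's fitness is the reward achieved by a
        # controller trained for that specific design (e.g., with PPO).
        scored = [(evaluate_with_ppo(design), design) for design in population]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        elite = [design for _, design in scored[:survivors]]
        # Outer loop: mutate surviving designs to form the next generation.
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - survivors)]
    return scored[0]  # (best fitness, best design) from the final generation
```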
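
For the Open Source Code row, here is a minimal usage sketch of the gym-style interface the released package documents: a randomly sampled 5x5 voxel robot stepped with random actions in the Walker-v0 task. Environment names and exact return signatures are assumptions to verify against the version installed from http://evogym.csail.mit.edu.

```python
import gym
import evogym.envs  # importing this module registers the Evolution Gym tasks
from evogym import sample_robot

if __name__ == "__main__":
    # Sample a random 5x5 voxel robot: a body matrix plus its connectivity.
    body, connections = sample_robot((5, 5))
    env = gym.make("Walker-v0", body=body)
    obs = env.reset()
    for _ in range(100):
        action = env.action_space.sample()
        obs, reward, done, info = env.step(action)
        if done:
            obs = env.reset()
    env.close()
```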
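
Because the Software Dependencies row notes that no version numbers are reported, a reproduction has to record them itself. The hypothetical snippet below prints the installed versions of the packages the paper names; the PyPI distribution names used here (in particular for PyTorch-NEAT) are assumptions, not values taken from the paper.

```python
import importlib.metadata as md

# Packages the paper mentions without versions. Distribution names are
# assumptions; adjust them to match however each library was installed.
for dist in ["torch", "GPyOpt", "neat-python", "pytorch-neat", "evogym"]:
    try:
        print(f"{dist}=={md.version(dist)}")
    except md.PackageNotFoundError:
        print(f"{dist}: not installed")
```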