Weight Agnostic Neural Networks

Authors: Adam Gaier, David Ha

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate that our method can find minimal neural network architectures that can perform several reinforcement learning tasks without weight training. On a supervised learning domain, we find network architectures that achieve much higher than chance accuracy on MNIST using random weights. Interactive version of this paper at https://weightagnostic.github.io/"
Researcher Affiliation | Collaboration | Adam Gaier, Bonn-Rhein-Sieg University of Applied Sciences and Inria / CNRS / Université de Lorraine (adam.gaier@h-brs.de); David Ha, Google Brain, Tokyo, Japan (hadavid@google.com)
Pseudocode | No | The paper gives a diagrammatic overview of the search process in Figure 2 and describes its steps numerically as (1) through (4), but it does not present them in a formal pseudocode or algorithm block (a hedged sketch of this loop is given after the table).
Open Source Code | Yes | "We released a software toolkit not only to facilitate reproduction, but also to further research in this direction." Refer to the Supplementary Materials for more information about the code repository.
Open Datasets | Yes | "We evaluate WANNs on three continuous control tasks. The first, Cart Pole Swing Up... The second task, Bipedal Walker-v2 [10]... The third, Car Racing-v0 [10]... As a proof of concept, we investigate how WANNs perform on the MNIST dataset [56]" (a loading sketch for the public benchmarks follows the table).
Dataset Splits | No | The paper mentions evaluating performance on these tasks but does not describe explicit training/validation/test splits for any of its experiments.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or cloud computing instance specifications).
Software Dependencies | No | The paper mentions using OpenAI Gym [10] and Keras [14] as part of its setup but does not specify version numbers for these or any other software dependencies (a version-recording snippet follows the table).
Experiment Setup | Yes | "In these experiments we used a fixed series of weight values ([−2, −1, −0.5, +0.5, +1, +2]) to decrease the variance between evaluations." (an evaluation sketch using this weight series follows the table)
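
The Experiment Setup row quotes the fixed weight series used to reduce evaluation variance. Below is a minimal sketch of how a single candidate topology could be scored over that series, assuming the classic Gym API (`reset`/`step` returning a 4-tuple) and a hypothetical `net_forward(obs, shared_weight)` callable; it is an illustration, not the authors' released code:

```python
import numpy as np

# Weight series quoted in the paper's experiment setup.
WEIGHT_SERIES = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]

def rollout(env, net_forward, shared_weight, max_steps=1000):
    """Run one episode with every connection set to `shared_weight`."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = net_forward(obs, shared_weight)
        obs, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

def evaluate_topology(env, net_forward):
    """Score one topology over the whole weight series; a weight-agnostic
    search ranks on statistics of this series (e.g. mean and worst case)."""
    rewards = [rollout(env, net_forward, w) for w in WEIGHT_SERIES]
    return float(np.mean(rewards)), float(np.min(rewards))
```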
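The Pseudocode row notes that the search steps (1) through (4) are only described in prose and in Figure 2. The following is a hedged Python rendering of that loop; `init_topology`, `mutate`, and `evaluate` are hypothetical injected callables rather than functions from the released toolkit, and the lexicographic sort simplifies the paper's multi-objective ranking of performance and complexity:

```python
import random

def wann_search(init_topology, mutate, evaluate,
                pop_size=64, n_generations=100, elite_frac=0.25):
    """Sketch of the four-step search loop described in the paper.

    init_topology() -> minimal topology (inputs wired directly to outputs)
    mutate(t)       -> varied copy of t (insert node, add connection,
                       or change an activation function)
    evaluate(t)     -> (mean_reward_over_weight_series, n_connections)
    """
    # (1) Create an initial population of minimal network topologies.
    population = [init_topology() for _ in range(pop_size)]

    best = None
    for _ in range(n_generations):
        # (2) Evaluate each network over multiple rollouts, each rollout
        #     using a different shared weight value.
        scored = [(t, evaluate(t)) for t in population]

        # (3) Rank by performance and complexity: higher mean reward first,
        #     fewer connections as tie-breaker (a simplification of the
        #     paper's multi-objective ranking).
        scored.sort(key=lambda item: (-item[1][0], item[1][1]))
        best = scored[0][0]

        # (4) Create the next population by varying the highest-ranked
        #     topologies.
        n_elite = max(1, int(elite_frac * pop_size))
        elite = [t for t, _ in scored[:n_elite]]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - n_elite)]
    return best
```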
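The Open Datasets row names the benchmarks. A sketch of loading the public ones, assuming a 2019-era OpenAI Gym (for the `-v2`/`-v0` environment IDs) and Keras's bundled MNIST loader; the Cart Pole Swing Up variant is a custom task from the authors' toolkit and is only noted in a comment:

```python
import gym
from keras.datasets import mnist

# Two of the three control tasks ship with Gym under these 2019-era IDs.
walker = gym.make("BipedalWalker-v2")
racer = gym.make("CarRacing-v0")
# Cart Pole Swing Up is a custom environment; see the authors' released
# toolkit rather than gym.make.

# MNIST loads with its canonical 60,000/10,000 train/test split; the paper
# does not describe any further split beyond this.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, x_test.shape)  # (60000, 28, 28) (10000, 28, 28)
```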
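Since the Software Dependencies row flags missing version numbers, a reproducer can at least record the versions actually installed; both packages expose a standard `__version__` attribute:

```python
import gym
import keras

# Logging these (or running `pip freeze > requirements.txt`) pins the
# setup for later runs.
print("gym:", gym.__version__)
print("keras:", keras.__version__)
```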