Neuromechanical Autoencoders: Learning to Couple Elastic and Neural Network Nonlinearity

Authors: Deniz Oktay, Mehran Mirramezani, Eder Medina, Ryan P. Adams

ICLR 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate in simulation how it is possible to achieve translation, rotation, and shape matching, as well as a digital MNIST task. We additionally manufacture and evaluate one of the designs to verify its real-world behavior.
Researcher Affiliation | Academia | Deniz Oktay, Mehran Mirramezani, Eder Medina, Ryan P. Adams, Department of Computer Science, Princeton University, {doktay,mehranmir,em2368,rpa}@princeton.edu
Pseudocode | No | The paper describes methods and processes but does not include a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | No | The paper does not provide a link to source code or an explicit statement that the implementation will be released.
Open Datasets | Yes | Our input to the neural network is an image sampled from the MNIST dataset.
Dataset Splits | No | The paper does not explicitly provide training/validation/test splits, percentages, or sample counts needed to reproduce the experiments.
Hardware Specification | Yes | Most computation was done on NVIDIA RTX 2080 GPUs.
Software Dependencies | No | The paper mentions a 'JAX-based' simulator but does not provide version numbers for JAX or other key software components such as Python.
Experiment Setup | Yes | The learning rate was 0.0001 M where M is the number of MPI tasks. In this case, we used 8 MPI tasks. The neural network was a fully-connected network with activation sizes 2, 30, 30, 10, 2 (including input/output). The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
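For concreteness, the experiment-setup row can be read as the minimal JAX sketch below. The layer sizes, the tanh clipping of the final layer, the 60%-of-cell-width displacement scale, and the 8 MPI tasks come from the quoted text; the hidden-layer activation, the parameter initialization, the cell width value, and the reading of "0.0001 M" as a product (linear scaling with the number of workers) are assumptions, since the excerpt does not specify them.

```python
# Sketch of the control network described in the Experiment Setup row.
# Assumptions (not stated in the paper excerpt): tanh hidden activations,
# simple scaled-normal initialization, and a unit cell width.
import jax
import jax.numpy as jnp

layer_sizes = [2, 30, 30, 10, 2]      # activation sizes, including input/output
cell_width = 1.0                      # placeholder value; actual cell width not given here
max_disp = 0.6 * cell_width           # maximum displacement: 60% of cell width

def init_params(key, sizes=layer_sizes):
    """Initialize (weight, bias) pairs for each fully-connected layer."""
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in)
        params.append((w, jnp.zeros(n_out)))
    return params

def forward(params, x):
    """Fully-connected network; final layer is tanh-clipped and scaled by max_disp."""
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)       # hidden activation is an assumption
    w, b = params[-1]
    return max_disp * jnp.tanh(x @ w + b)

num_mpi_tasks = 8                      # M = 8 MPI tasks, as stated in the paper
learning_rate = 1e-4 * num_mpi_tasks   # "0.0001 M" interpreted here as 0.0001 * M

params = init_params(jax.random.PRNGKey(0))
print(forward(params, jnp.ones(2)))    # 2-dimensional output, bounded by max_disp
```

This sketch only reconstructs the reported hyperparameters; it does not reproduce the authors' simulator or training loop.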