MetaMorph: Learning Universal Controllers with Transformers

Authors: Agrim Gupta, Linxi Fan, Surya Ganguli, Li Fei-Fei

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate our method MetaMorph in different environments, perform extensive ablation studies of different design choices, test zero-shot generalization to variations in dynamics and kinematics parameters, and demonstrate sample efficient transfer to new morphologies and tasks.
Researcher Affiliation | Collaboration | Agrim Gupta (1), Linxi Fan (1,3), Surya Ganguli (1,2), Li Fei-Fei (1,2); (1) Stanford University, (2) Stanford Institute for Human-Centered Artificial Intelligence, (3) NVIDIA Corporation; {agrim,sganguli,feifeili}@stanford.edu, linxif@nvidia.com
Pseudocode | Yes | Algorithm 1 (MetaMorph: Joint Training of Modular Robots); a minimal illustrative sketch of this joint training setup follows the table.
Open Source Code | Yes | We have released a PyTorch (Paszke et al., 2019) implementation of MetaMorph on GitHub (https://github.com/agrimgupta92/metamorph).
Open Datasets | Yes | We create a training set of 100 robots from the UNIMAL design space (Gupta et al., 2021) (see A.2).
Dataset Splits | No | No explicit mention of validation dataset splits (e.g., percentages, counts, or predefined splits) for the experiments. The paper describes training and test sets but not a distinct validation set.
Hardware Specification | Yes | 30 GPU days to train for 100 million iterations on Nvidia RTX 2080
Software Dependencies | No | We have released a PyTorch (Paszke et al., 2019) implementation of MetaMorph on GitHub (https://github.com/agrimgupta92/metamorph). The paper names PyTorch but does not give a versioned list of software dependencies.
Experiment Setup | Yes | All hyperparameters for Transformer and PPO are listed in Table 1.
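
As a companion to the Pseudocode row above: MetaMorph jointly trains a single Transformer controller with PPO across many UNIMAL morphologies by treating each limb as one token, so robots with different limb counts share the same policy weights. The PyTorch snippet below is a minimal illustrative sketch of that idea only, not the authors' released implementation; the class name TransformerPolicy, the per-limb observation size (52), the per-limb action size (2), and all model dimensions are assumptions chosen for the example, and the actual hyperparameters are those in Table 1 of the paper and the GitHub repository.

    import torch
    import torch.nn as nn

    class TransformerPolicy(nn.Module):
        # Hypothetical morphology-agnostic actor-critic: one token per limb,
        # with a single set of weights shared across all training robots.
        def __init__(self, limb_obs_dim=52, act_dim_per_limb=2,
                     d_model=128, nhead=2, nlayers=3):
            super().__init__()
            self.embed = nn.Linear(limb_obs_dim, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead,
                                               dim_feedforward=256,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, nlayers)
            self.action_head = nn.Linear(d_model, act_dim_per_limb)  # per-limb action decoding
            self.value_head = nn.Linear(d_model, 1)                  # critic, pooled over limbs

        def forward(self, limb_obs, pad_mask=None):
            # limb_obs: (robots, max_limbs, limb_obs_dim); robots with fewer limbs
            # are zero-padded, and pad_mask marks padded limb tokens with True.
            h = self.encoder(self.embed(limb_obs), src_key_padding_mask=pad_mask)
            actions = self.action_head(h)           # (robots, max_limbs, act_dim_per_limb)
            value = self.value_head(h).mean(dim=1)  # a real critic would exclude padded limbs
            return actions, value

    # Two hypothetical robots with different limb counts handled by the same policy;
    # in MetaMorph this shared policy is optimized with PPO over rollouts collected
    # from all 100 training morphologies (Algorithm 1 in the paper).
    policy = TransformerPolicy()
    obs = torch.randn(2, 6, 52)                                  # 2 robots, up to 6 limbs each
    pad = torch.tensor([[False] * 6, [False] * 4 + [True] * 2])  # second robot has only 4 limbs
    actions, value = policy(obs, pad_mask=pad)
    print(actions.shape, value.shape)                            # (2, 6, 2) and (2, 1)

The property the sketch illustrates is that the parameter count does not depend on the number of limbs, which is what allows one controller to be trained jointly on 100 robots and then transferred to new morphologies and tasks, as the paper evaluates.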