Make-An-Agent: A Generalizable Policy Network Generator with Behavior-Prompted Diffusion

Authors: Yongyuan Liang, Tingqiang Xu, Kaizhe Hu, Guangqi Jiang, Furong Huang, Huazhe Xu

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We conduct extensive experiments to evaluate Make-An-Agent, answering the following problems: How does our method compare with other multi-task or learning-to-learn approaches for policy learning, in terms of performance on seen tasks and generalization to unseen tasks? How scalable is our method, and can it be fine-tuned across different domains? Does our method merely memorize policy parameters and trajectories of each task, or can it generate diverse and new behaviors?" |
| Researcher Affiliation | Academia | 1. Shanghai Qi Zhi Institute; 2. University of Maryland, College Park; 3. Tsinghua University; 4. University of California, San Diego |
| Pseudocode | No | The paper describes the methodology with equations and conceptual diagrams but does not provide a formally labeled pseudocode or algorithm block. |
| Open Source Code | Yes | "Code, dataset and video are released in https://cheryyunl.github.io/make-an-agent/." |
| Open Datasets | Yes | "Code, dataset and video are released in https://cheryyunl.github.io/make-an-agent/." |
| Dataset Splits | No | The paper describes training and testing procedures but does not explicitly define a validation split for hyperparameter tuning or model selection. |
| Hardware Specification | Yes | "All model training are conducted on NVIDIA A40 GPUs." |
| Software Dependencies | No | The paper mentions common deep learning frameworks and algorithms but does not specify exact version numbers for software dependencies (e.g., Python, PyTorch). |
| Experiment Setup | Yes | "Table 1: Hyperparameters for Autoencoder; Table 2: Hyperparameters for Behavior Embedding; Table 3: Hyperparameters for Diffusion Model." |