Robust Visual Imitation Learning with Inverse Dynamics Representations

Authors: Siyuan Li, Xun Wang, Rongchang Zuo, Kewu Sun, Lingfei Cui, Jishiyu Ding, Peng Liu, Zhe Ma

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | We conduct extensive experiments to evaluate the proposed approach under various visual perturbations and in diverse visual control tasks. |
| Researcher Affiliation | Collaboration | Siyuan Li (1*), Xun Wang (2*), Rongchang Zuo (1), Kewu Sun (2), Lingfei Cui (3), Jishiyu Ding (2), Peng Liu (1), Zhe Ma (2). Affiliations: 1. Harbin Institute of Technology; 2. Intelligent Science & Technology Academy Limited of CASIC; 3. Institute of Computer Application Technology, Norinco Group |
| Pseudocode | Yes | In Appendix A, we provide the pseudocode and the algorithmic details of RILIR. (An illustrative inverse-dynamics sketch follows this table.) |
| Open Source Code | Yes | The code to reproduce these results is available in the supplementary material. |
| Open Datasets | Yes | We conduct extensive experiments on a set of visual control tasks in the Meta-World domain (Yu et al. 2020) and the DeepMind Control Suite (DMC) (Tassa et al. 2018). (An environment-loading sketch follows this table.) |
| Dataset Splits | No | The paper evaluates on the well-known Meta-World and DMC control suites, but it does not specify how data from these suites is split into training, validation, and test sets, whether by percentages, sample counts, or references to predefined splits. |
| Hardware Specification | Yes | These experiments have been run with A100 GPUs, and each run takes no more than 1 day. |
| Software Dependencies | No | The paper does not provide version numbers for the software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | In Appendix C, we provide the hyperparameters for all the baselines and the proposed approach. |
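
The Pseudocode row points to RILIR's algorithm in the paper's Appendix A, which is not reproduced here. Purely as an illustrative sketch of the general inverse-dynamics idea named in the title, the snippet below shows a generic PyTorch model that predicts the action taken between two consecutive image observations. The architecture, dimensions, and training step are assumptions for exposition, not the paper's actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InverseDynamicsModel(nn.Module):
    """Predicts the action taken between two consecutive image observations.

    Hypothetical architecture for illustration; not the paper's RILIR model.
    """

    def __init__(self, action_dim: int, embed_dim: int = 128):
        super().__init__()
        # Small CNN encoder for 84x84 RGB frames (a common choice in visual RL).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, embed_dim),  # 7x7 spatial size for 84x84 input
        )
        # MLP head that infers the action from the pair of embeddings.
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, obs: torch.Tensor, next_obs: torch.Tensor) -> torch.Tensor:
        z, z_next = self.encoder(obs), self.encoder(next_obs)
        return self.head(torch.cat([z, z_next], dim=-1))

# One supervised training step on a dummy batch of (obs, action, next_obs)
# transitions; a continuous 4-dimensional action space is assumed here.
model = InverseDynamicsModel(action_dim=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
obs = torch.randn(32, 3, 84, 84)
next_obs = torch.randn(32, 3, 84, 84)
actions = torch.randn(32, 4)
loss = F.mse_loss(model(obs, next_obs), actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```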
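
The Open Datasets row cites the public Meta-World and DeepMind Control Suite benchmarks. Below is a minimal sketch of instantiating a DMC task with the publicly documented dm_control suite API (Tassa et al. 2018); the walker/walk task is an arbitrary example, not necessarily one the paper evaluates.

```python
import numpy as np
from dm_control import suite

# Load one of the DMC benchmark tasks; "walker/walk" is an arbitrary choice.
env = suite.load(domain_name="walker", task_name="walk")
action_spec = env.action_spec()

time_step = env.reset()
for _ in range(100):
    if time_step.last():
        break
    # Uniform random action within the spec bounds, just to drive the loop.
    action = np.random.uniform(action_spec.minimum,
                               action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)
    # Pixel observations, as used in visual control, can be rendered like so:
    frame = env.physics.render(height=84, width=84, camera_id=0)
```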