Data Quality in Imitation Learning

Authors: Suneel Belkhale, Yuchen Cui, Dorsa Sadigh

NeurIPS 2023

Reproducibility variables, each with a verdict and the supporting LLM response:
Research Type: Experimental
    "We investigate the combined effect of these two key properties in imitation learning theoretically, and we empirically analyze models trained on a variety of different data sources."
Researcher Affiliation: Academia
    Suneel Belkhale (Stanford University, belkhale@stanford.edu); Yuchen Cui (Stanford University, yuchenc@stanford.edu); Dorsa Sadigh (Stanford University, dorsa@stanford.edu)
Pseudocode: No
    No pseudocode or algorithm blocks explicitly labeled as such were found in the paper.
Open Source Code: No
    The paper does not provide an explicit statement about, or link to, open-source code for the described methodology.
Open Datasets: Yes
    "In Table 1, we consider single and multi-human datasets from the Square and Can tasks from robomimic [37]."
Dataset Splits: No
    The paper does not explicitly provide the training/validation/test splits (e.g., percentages, sample counts, or references to predefined splits) needed for reproduction. It mentions "training" and "test time" in the context of distribution shift and high/low data regimes, but gives no concrete splits.
Hardware Specification: No
    The paper does not specify the hardware used to run the experiments, such as GPU or CPU models or cloud computing instance types.
Software Dependencies: No
    The paper states that "BC uses an MLP architecture" and reports Transformer architecture results, but it does not provide version numbers for any software dependencies or libraries used.
Experiment Setup: Yes
    "We train Behavior Cloning (BC) with data generated with system noise and policy noise in two environments: PMObstacle... and Square... BC uses an MLP architecture." (Section 5.1). The tables also report varied noise levels (e.g., σs = 0.01, σp = 0.01) and episode counts (e.g., 10 and 1000 episodes).
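The Experiment Setup row describes behavior cloning on demonstrations corrupted by system noise (σs, injected into the dynamics) and policy noise (σp, injected into the expert's actions). A minimal sketch of this data-generation and training recipe, under stated assumptions: a hypothetical 1-D point-mass stand-in (not the paper's PMObstacle or Square environments), a linear expert a = -K·s, and least-squares regression standing in for the paper's MLP policy.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 0.8          # hypothetical expert feedback gain (illustrative, not from the paper)
sigma_s = 0.01   # system noise level, matching the paper's tabulated sigma_s = 0.01
sigma_p = 0.01   # policy noise level, matching the paper's tabulated sigma_p = 0.01

def collect_episode(T=50):
    """Roll out the noisy expert for T steps, recording (state, action) pairs."""
    states, actions = [], []
    s = rng.uniform(-1.0, 1.0)
    for _ in range(T):
        a = -K * s + rng.normal(0.0, sigma_p)   # policy noise on the expert action
        states.append(s)
        actions.append(a)
        s = s + a + rng.normal(0.0, sigma_s)    # system noise in the dynamics
    return np.array(states), np.array(actions)

# Gather demonstrations; the paper varies episode counts (e.g., 10 vs. 1000).
S, A = zip(*(collect_episode() for _ in range(100)))
S, A = np.concatenate(S), np.concatenate(A)

# Behavior cloning reduced to supervised regression of action on state.
w = np.polyfit(S, A, deg=1)   # w[0] is the fitted slope, w[1] the intercept
print(w)
```

With enough noisy episodes, the fitted slope recovers the expert gain (w[0] near -K), illustrating why the paper sweeps both noise levels and episode counts: at low episode counts the same noise leaves the cloned policy far less accurate.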