What Matters for Adversarial Imitation Learning?
Authors: Manu Orsini, Anton Raichuk, Léonard Hussenot, Damien Vincent, Robert Dadashi, Sertan Girgin, Matthieu Geist, Olivier Bachem, Olivier Pietquin, Marcin Andrychowicz
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To tackle this issue, we implement more than 50 of these choices in a generic adversarial imitation learning framework and investigate their impacts in a large-scale study (>500k trained agents) with both synthetic and human-generated demonstrations. We analyze the key results and highlight the most surprising findings. |
| Researcher Affiliation | Collaboration | Manu Orsini, Anton Raichuk, Léonard Hussenot, Damien Vincent, Robert Dadashi, Sertan Girgin, Matthieu Geist, Olivier Bachem, Olivier Pietquin, Marcin Andrychowicz. Google Research, Brain Team. Equal contribution. Univ. de Lille, CNRS, Inria Scool, UMR 9189 CRIStAL. |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. Methodological steps are described in the text. |
| Open Source Code | Yes | We release this generic AIL agent, implemented in JAX [25] as part of the Acme [49] framework: https://github.com/deepmind/acme/blob/master/acme/agents/jax/ail. |
| Open Datasets | Yes | For the Gym tasks, we generate demonstrations with a SAC [28] agent trained on the environment reward. For the Adroit environments, we use the expert and human datasets from D4RL [45], which are, respectively, generated by an RL agent and collected from a human operator. ... For all environments, we use 11 demonstration trajectories. Following prior work [14, 31, 46], we subsample expert demonstrations by only using every 20th state-action pair to make the tasks harder. (The subsampling step is illustrated in the first sketch after this table.) |
| Dataset Splits | No | The paper describes training and evaluation but does not explicitly mention a distinct 'validation' dataset split. It mentions a 'large HP sweep' and 'evaluat[ing] it 10 times through the training', but no validation split is used for model selection or hyperparameter tuning in the traditional sense. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or cloud resources). The paper's checklist also states: "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [No]" |
| Software Dependencies | No | The paper mentions key software components like JAX [25], Acme [49], and Flax [47]. However, it does not provide specific version numbers for these libraries, which are required for a reproducible description of software dependencies. |
| Experiment Setup | Yes | We created a large HP sweep (57 HPs swept, >120k agents trained) in which each HP is sampled uniformly at random from a discrete set and independently from the other HPs. We manually ensured that the sampling ranges of all HPs are appropriate and cover the optimal values. Then, we analyzed the results of this initial experiment (called wide, detailed description and results in App. G), removed clearly suboptimal options and ran another experiment with the pruned sampling ranges (called main, 43 HPs swept, >250k agents trained, detailed description and results in App. H). The latter experiment serves as the basis for most of the conclusions drawn in this paper but we also run a few additional experiments to investigate some additional questions (App. I and App. J). (The HP sampling scheme is illustrated in the second sketch after this table.) |
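
The Open Datasets row quotes the paper's demonstration preprocessing: keep only every 20th state-action pair of each trajectory. Below is a minimal sketch of that step. The function name and the list-of-(state, action)-tuples layout are assumptions for illustration; the stride of 20, the 11 trajectories, and the Gym-style dimensions come from the paper or are plausible placeholders (17-dim states, 6-dim actions).

```python
import numpy as np

def subsample_demonstration(trajectory, stride=20, offset=0):
    """Keep every `stride`-th (state, action) pair of one demonstration."""
    return trajectory[offset::stride]

# Hypothetical demo data: 11 trajectories of 1000 (state, action) pairs each,
# with 17-dim states and 6-dim actions (a HalfCheetah-like Gym task).
demos = [
    [(np.zeros(17), np.zeros(6)) for _ in range(1000)]
    for _ in range(11)
]
subsampled = [subsample_demonstration(d, stride=20) for d in demos]
print(len(subsampled[0]))  # 1000 / 20 = 50 pairs per trajectory
```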
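
The Experiment Setup row describes sweeps in which each hyperparameter is sampled uniformly at random from a discrete set, independently of the others. The sketch below shows that sampling scheme under stated assumptions: the HP names and value sets are hypothetical placeholders, not the paper's actual 57-HP (wide) or 43-HP (main) search spaces.

```python
import random

# Hypothetical sweep definition: names and discrete value sets are
# illustrative only, not the paper's actual search space.
SWEEP = {
    "policy_learning_rate": [1e-5, 3e-5, 1e-4, 3e-4, 1e-3],
    "discriminator_learning_rate": [1e-5, 3e-5, 1e-4, 3e-4],
    "entropy_coefficient": [0.0, 1e-3, 1e-2, 1e-1],
    "gradient_penalty_weight": [0.0, 0.1, 1.0, 10.0],
}

def sample_config(sweep, rng=random):
    """Sample each HP uniformly at random from its discrete set, independently."""
    return {name: rng.choice(values) for name, values in sweep.items()}

# Draw a handful of independent configurations, one per agent to train.
for config in (sample_config(SWEEP) for _ in range(5)):
    print(config)
```

Because every HP is drawn independently and uniformly, the marginal effect of any single choice can later be estimated by conditioning on its value across the trained agents, which is how the study attributes performance differences to individual choices.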