Multi-Game Decision Transformers

Authors: Kuang-Huei Lee, Ofir Nachum, Mengjiao (Sherry) Yang, Lisa Lee, Daniel Freeman, Sergio Guadarrama, Ian Fischer, Winnie Xu, Eric Jang, Henryk Michalewski, Igor Mordatch

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Specifically, we show that a single transformer-based model (with a single set of weights) trained purely offline can play a suite of up to 46 Atari games simultaneously at close-to-human performance. When trained and evaluated appropriately, we find that the same trends observed in language and vision hold, including scaling of performance with model size and rapid adaptation to new games via fine-tuning. We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning, and find that our Multi-Game Decision Transformer models offer the best scalability and performance.
Researcher Affiliation | Industry | Kuang-Huei Lee, Ofir Nachum, Mengjiao Yang, Lisa Lee, Daniel Freeman, Winnie Xu, Sergio Guadarrama, Ian Fischer, Eric Jang, Henryk Michalewski, Igor Mordatch (Google Research)
Pseudocode | No | The paper describes the model architecture and procedures using text and diagrams (Figures 3 and 4), but it does not include a formal pseudocode block or algorithm listing.
Open Source Code | Yes | We release the pre-trained models and code to encourage further research in this direction. Additional information, videos and code can be seen at sites.google.com/view/multi-game-transformers.
Open Datasets | Yes | To train the model, we use an existing dataset of Atari trajectories (with quantized returns) introduced in [1]. [1] Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. An optimistic perspective on offline reinforcement learning. In International Conference on Machine Learning, pages 104–114. PMLR, 2020.
Dataset Splits | Yes | 41 games are used for training and 5 games are held out for out-of-distribution generalization experiments. ...pretraining DT, CQL, CPC, BERT, and ACL on the full datasets of the 41 training games with 100M steps each, and fine-tuning one model per held-out game using 1% (1M steps) from each game.
Hardware Specification | Yes | We train all Multi-Game DT models on TPUv4 hardware...
Software Dependencies | No | The paper mentions the 'Jaxline (Babuschkin et al. [7]) framework' and the 'LAMB optimizer [81]', but it does not specify version numbers for these software components.
Experiment Setup | Yes | We train all Multi-Game DT models on TPUv4 hardware and the Jaxline (Babuschkin et al. [7]) framework for 10M steps using the LAMB optimizer [81] with a 3 × 10⁻⁴ learning rate, 4000 steps linear warm-up, no weight decay, gradient clip 1.0, β1 = 0.9 and β2 = 0.999, and batch size 2048. For fine-tuning on novel games, we train for 100k steps with a 10⁻⁴ learning rate, 10⁻² weight decay and batch size of 256 instead.
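
The Research Type row quotes the abstract's claim that a single return-conditioned transformer, trained purely offline, plays up to 46 Atari games. As a rough sketch of what return-conditioned sequence modeling looks like in practice, the snippet below flattens one trajectory into an interleaved per-timestep token stream of observation patches, return-to-go, action, and reward. The function names, return-bin edges, and reward ternarization are illustrative assumptions, not the released implementation.

```python
# Illustrative sketch only: flatten a trajectory into the interleaved token
# stream <obs patch tokens, return-to-go, action, reward> per timestep, the
# kind of sequence a return-conditioned decision transformer is trained on.
# Names, bin edges, and id handling are assumptions, not the paper's code.
import numpy as np

def returns_to_go(rewards: np.ndarray) -> np.ndarray:
    """Undiscounted future return R_t = sum_{t' >= t} r_{t'} at every step."""
    return np.cumsum(rewards[::-1])[::-1]

def flatten_trajectory(patch_tokens: np.ndarray,  # [T, M] discretized image patches
                       actions: np.ndarray,       # [T] discrete action ids
                       rewards: np.ndarray,       # [T] raw scalar rewards
                       num_return_bins: int = 120) -> np.ndarray:
    """Build the [T * (M + 3)] token-id sequence for one trajectory."""
    # Quantize returns-to-go into a fixed number of bins (bin range is assumed).
    rtg_bins = np.digitize(returns_to_go(rewards),
                           np.linspace(-20.0, 100.0, num_return_bins - 1))
    # Ternarize rewards to {-1, 0, +1} and shift to non-negative ids.
    rew_ids = (np.sign(rewards) + 1).astype(np.int64)
    steps = []
    for p, R, a, r in zip(patch_tokens, rtg_bins, actions, rew_ids):
        # In the real model each modality occupies a disjoint range of token ids;
        # here we simply concatenate the raw ids to show the ordering.
        steps.append(np.concatenate([p, [R], [a], [r]]))
    return np.concatenate(steps).astype(np.int64)
```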
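
The Dataset Splits row reports 41 pretraining games, 5 held-out games, and fine-tuning on 1% (1M steps) of each held-out game's data. A minimal sketch of that protocol follows; the game lists and the transitions container are placeholders, and uniform random subsampling is an assumption, since the paper only specifies the 1% fraction.

```python
# Sketch of the split/fine-tune data protocol described in the table above.
# Concrete game lists and the `transitions` container are placeholders.
import random
from typing import List, Sequence, Tuple

def split_games(all_games: Sequence[str],
                held_out: Sequence[str]) -> Tuple[List[str], List[str]]:
    """Partition the Atari suite into 41 pretraining games and 5 held-out games."""
    train = [g for g in all_games if g not in set(held_out)]
    assert len(train) == 41 and len(held_out) == 5
    return train, list(held_out)

def finetune_subset(transitions: Sequence, fraction: float = 0.01,
                    seed: int = 0) -> List:
    """Keep 1% of a held-out game's ~100M-step dataset (about 1M steps)."""
    rng = random.Random(seed)
    k = int(len(transitions) * fraction)
    return rng.sample(list(transitions), k)  # uniform subsampling is assumed
```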
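
The Experiment Setup row gives the optimizer hyperparameters but no code. Below is one way to express that configuration with optax (the paper names Jaxline and LAMB, not optax); keeping the learning rate constant after warm-up and reusing the gradient clip during fine-tuning are assumptions, only the numbers come from the quoted text.

```python
# One possible optax expression of the quoted training configuration. The
# post-warm-up schedule and the fine-tuning gradient clip are assumptions.
import optax

def pretrain_optimizer() -> optax.GradientTransformation:
    # 4000-step linear warm-up to 3e-4, held constant afterwards (assumed).
    lr = optax.join_schedules(
        schedules=[optax.linear_schedule(0.0, 3e-4, transition_steps=4_000),
                   optax.constant_schedule(3e-4)],
        boundaries=[4_000])
    return optax.chain(
        optax.clip_by_global_norm(1.0),                      # gradient clip 1.0
        optax.lamb(lr, b1=0.9, b2=0.999, weight_decay=0.0))  # LAMB, no weight decay
    # Batch size 2048 and the 10M-step budget live in the data/training loop.

def finetune_optimizer() -> optax.GradientTransformation:
    # Fine-tuning on a novel game: 1e-4 LR, 1e-2 weight decay,
    # batch size 256, 100k steps.
    return optax.chain(
        optax.clip_by_global_norm(1.0),  # assumed to carry over from pretraining
        optax.lamb(1e-4, b1=0.9, b2=0.999, weight_decay=1e-2))
```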