Survival Instinct in Offline Reinforcement Learning

Authors: Anqi Li, Dipendra Misra, Andrey Kolobov, Ching-An Cheng

NeurIPS 2023

Reproducibility variables, results, and supporting evidence (LLM responses):
Research Type: Experimental. Evidence: "We conduct two groups of experiments. In Section 4.1, we conduct a large-scale experiment showing that multiple offline RL algorithms can be robust to reward mis-specification on a variety of datasets. In Section 4.2, we experimentally validate the inherent safety of offline RL algorithms stated in Corollary 2."
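
For context, "reward mis-specification" here means training on a dataset whose rewards are wrong (e.g., zeroed, negated, or random). A minimal sketch of how such corruption might be applied to a D4RL-style dataset dict; the helper name and modes are our illustration, not the authors' code:

```python
import numpy as np

def corrupt_rewards(dataset, mode="zero", rng=None):
    """Return a copy of a D4RL-style dataset dict with wrong rewards.

    The modes are illustrative stand-ins for the paper's wrong-reward
    settings; the helper itself is hypothetical.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    r = np.asarray(dataset["rewards"], dtype=np.float64)
    corrupted = dict(dataset)  # shallow copy; only rewards change
    if mode == "zero":       # every reward replaced by 0
        corrupted["rewards"] = np.zeros_like(r)
    elif mode == "negate":   # sign of every reward flipped
        corrupted["rewards"] = -r
    elif mode == "random":   # i.i.d. noise unrelated to the task
        corrupted["rewards"] = rng.uniform(-1.0, 1.0, size=r.shape)
    else:
        raise ValueError(f"unknown mode: {mode}")
    return corrupted
```
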
Researcher Affiliation: Collaboration. Evidence: Anqi Li (University of Washington); Dipendra Misra (Microsoft Research); Andrey Kolobov (Microsoft Research); Ching-An Cheng (Microsoft Research).
Pseudocode: Yes. Evidence: Algorithm 1, VI-LCB [31]; Algorithm 2, Pessimistic Policy Iteration (PPI) [13]; Algorithm 3, Pessimistic Q-Iteration (PQI) [13]; Algorithm 4, PEVI(S, A, H, D, β, Vmin, Vmax) [12].
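
To make the pessimistic flavor of these algorithms concrete, here is a minimal tabular sketch in the spirit of VI-LCB (value iteration with a lower-confidence-bound penalty). The count-based penalty form and all names are our simplification, not the paper's Algorithm 1:

```python
import numpy as np

def vi_lcb(P_hat, R_hat, counts, H, beta):
    """Tabular value iteration with a lower-confidence-bound penalty.

    P_hat:  (S, A, S) empirical transition probabilities from the data
    R_hat:  (S, A)    empirical mean rewards (assumed in [0, 1])
    counts: (S, A)    dataset visitation counts n(s, a)
    H: horizon; beta: pessimism coefficient.
    Rarely visited (s, a) pairs get a large penalty, so the learned
    policy avoids actions unsupported by the dataset.
    """
    S, A, _ = P_hat.shape
    V = np.zeros(S)
    pi = np.zeros((H, S), dtype=int)
    penalty = beta / np.sqrt(np.maximum(counts, 1))
    for h in reversed(range(H)):
        Q = R_hat - penalty + P_hat @ V   # (S, A) pessimistic Q-values
        Q = np.clip(Q, 0.0, H)            # keep values in the valid range
        pi[h] = Q.argmax(axis=1)          # greedy policy at step h
        V = Q.max(axis=1)
    return pi, V
```
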
Open Source Code: Yes. Evidence: "Please visit our website https://survival-instinct.github.io for accompanying code and videos."
Open Datasets: Yes. Evidence: "on the hopper task from D4RL [1], a popular offline RL benchmark, using a state-of-the-art offline RL algorithm ATAC [2] ... on dozens of datasets from D4RL [1] and Meta-World [11] benchmarks ... We conduct experiments on offline Safety Gymnasium [49]."
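
Since the benchmarks are public, loading one of the cited datasets is straightforward. A small sketch using the standard D4RL API; the specific dataset name is an assumption, as the excerpt does not fix one:

```python
import gym
import d4rl  # noqa: F401 -- importing registers D4RL envs with gym

# "hopper-medium-v2" is one plausible hopper dataset; the excerpt does
# not pin down which hopper variant the authors use.
env = gym.make("hopper-medium-v2")
dataset = d4rl.qlearning_dataset(env)  # observations, actions, rewards,
                                       # next_observations, terminals
print({k: v.shape for k, v in dataset.items()})
```
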
Dataset Splits: No. Evidence: "During tuning, we evaluate each combination of hyperparameter(s) for 2 random seeds for D4RL and 3 random seeds for Meta-World. We choose the best-performing values and report the results of these hyperparameters over 10 new random seeds (not including the tuning seeds)."
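
The described protocol (tune on a few seeds, report on fresh ones) can be sketched as follows; run_fn and the scoring are placeholders for an actual training-and-evaluation run:

```python
import numpy as np

def tune_and_report(run_fn, candidates, n_tune_seeds, n_report_seeds=10):
    """Tune on a few seeds, then report on fresh, disjoint seeds.

    run_fn(hparams, seed) -> scalar score is a placeholder for one full
    training-and-evaluation run; candidates is a list of (hashable)
    hyperparameter settings, e.g. tuples.
    """
    tune_seeds = range(n_tune_seeds)  # e.g. 2 for D4RL, 3 for Meta-World
    scores = {hp: np.mean([run_fn(hp, s) for s in tune_seeds])
              for hp in candidates}
    best = max(scores, key=scores.get)
    # Report seeds are disjoint from the tuning seeds, as in the excerpt.
    report_seeds = range(n_tune_seeds, n_tune_seeds + n_report_seeds)
    return best, [run_fn(best, s) for s in report_seeds]
```
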
Hardware Specification: Yes. Evidence: "each run takes around 5.5 hours on an NC4as_T4_v3 Azure virtual machine" for D4RL, Meta-World, and Safety Gymnasium experiments; "Each run takes around 2.5 hours on an NC6s_v2 or ND6s Azure virtual machine."
Software Dependencies: No. Evidence: "We use the implementation of ATAC and PSPI from https://github.com/chinganc/lightATAC ... We use the implementation of IQL and BC from https://github.com/gwthomas/IQL-PyTorch/, and the implementation of CQL from https://github.com/young-geng/CQL. For Decision Transformer, we use the implementation from https://github.com/kzl/decision-transformer."
Experiment Setup: Yes. Evidence: "We use the default values (ones provided by the original paper or by the implementation we use) for most of the hyperparameters for all algorithms. For each algorithm, we tune one or two hyperparameters (with 4 combinations at most) which affect the degree of pessimism ... The values of hyperparameters of all algorithms are given in Tables 1-4, where the choices of hyperparameters used in tuning are highlighted in blue. The tuned hyperparameter values for all experiments and all algorithms are provided in Tables 9-12."
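
As an illustration of the kind of sweep the excerpt describes, a hypothetical grid over pessimism-related hyperparameters; the names and values below are invented for illustration, and the actual grids are listed in the paper's Tables 1-4:

```python
# Hypothetical sweep in the style described above: defaults everywhere,
# plus a small grid (<= 4 combinations) over pessimism-related knobs.
SWEEPS = {
    "ATAC": {"beta": [0.1, 1.0, 10.0, 100.0]},       # pessimism weight
    "CQL":  {"alpha": [1.0, 10.0]},                  # conservatism penalty
    "IQL":  {"expectile": [0.7, 0.9], "temperature": [3.0, 10.0]},
}
```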