Offline Reinforcement Learning with Behavioral Supervisor Tuning

Authors: Padmanaba Srinivasan, William Knottenbelt

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our algorithm on the D4RL benchmark [Fu et al., 2020], including the Gym Locomotion and challenging Antmaze navigation tasks." (Sec. 5.1, Comparison with SOTA Methods) "We evaluate TD3-BST against the older, well known baselines of TD3-BC [Fujimoto and Gu, 2021], CQL [Kumar et al., 2020], and IQL [Kostrikov et al., 2021b]." (Sec. 5.3, Ablation Experiments, Morse Network Analysis) "We analyze how well the Morse network can distinguish between dataset tuples and samples from D_perm, permutations of dataset actions, and D_uni."
Researcher Affiliation | Academia | Padmanaba Srinivasan, William Knottenbelt, Imperial College London, {ps3416, wjk}@imperial.ac.uk
Pseudocode | Yes | "Algorithm 1: TD3-BST Training Procedure Outline." The policy is updated once for every m = 2 critic updates, as is the default in TD3; see the schedule sketch after this table.
Open Source Code | No | The paper does not provide an explicit statement or link for open-sourcing the code.
Open Datasets | Yes | "We evaluate our algorithm on the D4RL benchmark [Fu et al., 2020], including the Gym Locomotion and challenging Antmaze navigation tasks." A loading sketch follows the table.
Dataset Splits | No | The paper does not specify exact training, validation, or test split percentages or counts. It only mentions the D4RL benchmark, which ships predefined datasets, and does not state explicit splits.
Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | "We provide full hyperparameter details in the appendix."
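
For context on the Pseudocode row: the delayed policy update referenced there is the standard TD3 schedule, where the actor is updated once per m = 2 critic updates. Below is a minimal sketch of that schedule only; update_critics, update_policy, and soft_update_targets are hypothetical stubs standing in for TD3-BST's actual update rules, which this sketch does not reproduce.

```python
import numpy as np

# Hypothetical stand-ins for the actual TD3-BST update rules;
# only the update *schedule* is illustrated here.
def update_critics(batch):
    pass  # one gradient step on both Q-networks

def update_policy(batch):
    pass  # one (delayed) gradient step on the actor

def soft_update_targets(tau=0.005):
    pass  # Polyak averaging of the target networks

M = 2  # policy updated once per m = 2 critic updates (TD3 default)

rng = np.random.default_rng(0)
obs = rng.normal(size=(10_000, 17))  # placeholder offline dataset
act = rng.normal(size=(10_000, 6))

for step in range(100_000):
    idx = rng.integers(0, len(obs), size=256)  # sample a mini-batch
    batch = (obs[idx], act[idx])
    update_critics(batch)        # critics are updated every step
    if step % M == 0:            # actor updated every M-th step
        update_policy(batch)
        soft_update_targets()
```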
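For the Open Datasets and Morse Network Analysis rows: D4RL datasets are publicly loadable, and the D_perm / D_uni comparison sets described in the quoted ablation can be built directly from the loaded arrays. A minimal sketch, assuming the standard gym + d4rl packages; the task name is illustrative, and the exact environments used for the ablation are not stated in this table.

```python
import gym
import d4rl  # registers the D4RL environments with gym
import numpy as np

env = gym.make("halfcheetah-medium-v2")  # illustrative D4RL task
data = d4rl.qlearning_dataset(env)       # observations, actions, rewards, ...
obs, act = data["observations"], data["actions"]

rng = np.random.default_rng(0)

# D_perm: dataset states paired with a random permutation of dataset actions
perm_act = act[rng.permutation(len(act))]

# D_uni: dataset states paired with actions drawn uniformly from the action space
uni_act = rng.uniform(env.action_space.low, env.action_space.high,
                      size=act.shape)

# A trained Morse network should score (obs, act) pairs as in-distribution
# while scoring (obs, perm_act) and (obs, uni_act) pairs as out-of-distribution.
```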