Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Discovering Hierarchical Achievements in Reinforcement Learning via Contrastive Learning

Authors: Seungyong Moon, Junyoung Yeom, Bumsoo Park, Hyun Oh Song

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "5 Experiments"
Researcher Affiliation | Collaboration | Seungyong Moon (Seoul National University), Junyoung Yeom (Seoul National University), Bumsoo Park (KRAFTON), Hyun Oh Song (Seoul National University)
Pseudocode | Yes | "Algorithm 1: PPO with achievement distillation"
Open Source Code | Yes | "The code can be found at https://github.com/snu-mllab/Achievement-Distillation."
Open Datasets | Yes | "We primarily utilize the Crafter environment as a benchmark to assess the capabilities of an agent in solving MDPs with hierarchical achievements [19]."
Dataset Splits | No | The paper uses the procedurally generated Crafter environment, where data is collected through interaction rather than from static train/validation/test splits.
Hardware Specification | No | No hardware details (GPU or CPU models, or cloud instance types) are provided for the experiments.
Software Dependencies | No | No software versions for libraries or frameworks (e.g., PyTorch, Python) are listed in the paper.
Experiment Setup | Yes | "We provide the implementation details and hyperparameters in Appendix C."