Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
The Value Equivalence Principle for Model-Based Reinforcement Learning
Authors: Christopher Grimm, Andre Barreto, Satinder Singh, David Silver
NeurIPS 2020 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We illustrate the benefits of value-equivalent model learning with experiments comparing it against more traditional counterparts like maximum likelihood estimation." (Abstract) "We now present experiments illustrating the usefulness of the value equivalence principle in practice." |
| Researcher Affiliation | Collaboration | Christopher Grimm, Computer Science & Engineering, University of Michigan (EMAIL); André Barreto, Satinder Singh, David Silver, DeepMind (EMAIL) |
| Pseudocode | No | The paper includes mathematical formulations, such as equation (6) for the value-equivalence loss, but it does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about releasing the source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper mentions using well-known domains like "four rooms [37], catch [25], and cart-pole [4]" for experiments. However, it does not provide specific access information (e.g., URLs, DOIs, or dataset names with formal citations including author and year for direct access) for publicly available datasets used. |
| Dataset Splits | No | The paper states that for some experiments, "we collected 1000 sample transitions" or "10000 sample transitions" (Appendix A.2). However, it does not specify how these samples were split into training, validation, and test sets with percentages, sample counts for validation, or reference to predefined splits. |
| Hardware Specification | No | The paper states only: "We did not use any hardware specific setup or software dependencies other than standard libraries for Python (e.g., NumPy, TensorFlow/PyTorch)." (Appendix A.2) No CPU/GPU models, memory, or compute budget are reported. |
| Software Dependencies | No | "We did not use any hardware specific setup or software dependencies other than standard libraries for Python (e.g., NumPy, TensorFlow/PyTorch)." (Appendix A.2) Libraries are named only generically, with no version numbers, so the environment cannot be reconstructed exactly. |
| Experiment Setup | Yes | "All neural networks were trained using Adam optimizer with a learning rate of 0.001." |
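The only optimization detail quoted above is Adam with a learning rate of 0.001. As a rough illustration of what that setting means (this is not the paper's code, and the function below is a generic NumPy sketch of a single Adam update, not the authors' implementation):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; lr=0.001 matches the paper's stated setting.

    theta: parameter value, grad: its gradient, m/v: first/second moment
    estimates, t: 1-based step counter.
    """
    m = b1 * m + (1 - b1) * grad          # update biased first moment
    v = b2 * v + (1 - b2) * grad ** 2     # update biased second moment
    m_hat = m / (1 - b1 ** t)             # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimizing f(x) = x^2 from x = 1.0: each step moves theta toward 0
# by roughly lr per iteration early on.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 101):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t)
```

With the default betas shown here (an assumption; the paper specifies only the learning rate), each early step shrinks `theta` by about 0.001.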