Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
True Online Temporal-Difference Learning
Authors: Harm van Seijen, A. Rupam Mahmood, Patrick M. Pilarski, Marlos C. Machado, Richard S. Sutton
JMLR 2016 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this article, we put this hypothesis to the test by performing an extensive empirical comparison. Specifically, we compare the performance of true online TD(λ)/Sarsa(λ) with regular TD(λ)/Sarsa(λ) on random MRPs, a real-world myoelectric prosthetic arm, and a domain from the Arcade Learning Environment. |
| Researcher Affiliation | Collaboration | Harm van Seijen (Maluuba Research, 2000 Peel Street, Montreal, QC, Canada H3A 2W5); A. Rupam Mahmood, Patrick M. Pilarski, Marlos C. Machado, and Richard S. Sutton (Reinforcement Learning and Artificial Intelligence Laboratory, University of Alberta, 2-21 Athabasca Hall, Edmonton, AB, Canada T6G 2E8) |
| Pseudocode | Yes | Algorithm 1 (accumulate TD(λ)), Algorithm 2 (true online TD(λ)), Algorithm 3 (true online Sarsa(λ)), Algorithm 4 (true online TD(λ) with time-dependent step-size), Algorithm 5 (true online version of Watkins's Q(λ)), Algorithm 6 (tabular true online TD(λ)). |
| Open Source Code | Yes | 4. The code for the MRP experiments is published online at: https://github.com/armahmood/totd-rndmdp-experiments. 5. The code for the Asterix experiments is published online at: https://github.com/mcmachado/TrueOnlineSarsa. |
| Open Datasets | Yes | The source of the data is a series of manipulation tasks performed by a participant with an amputation, as presented by Pilarski et al. (2013). Additionally, experiments were conducted on a domain from the Arcade Learning Environment (ALE) (Bellemare et al., 2013; Defazio & Graepel, 2014; Mnih et al., 2015), called Asterix. |
| Dataset Splits | No | The paper describes online learning scenarios and evaluation metrics over continuous data streams or full game play, rather than providing explicit train/test/validation splits for a static dataset. For the prosthetic arm data, it states 'mean absolute return error over all 58,000 time steps of learning'. For Asterix, it mentions 'average score per episode while learning for 20 hours (4,320,000 frames)'. |
| Hardware Specification | No | The paper does not provide specific hardware details (such as GPU/CPU models) used for running the experiments. It mentions 'Computing resources were provided by Compute Canada through WestGrid', which is a general statement about a computing environment without concrete specifications. |
| Software Dependencies | Yes | 5. We used ALE version 0.4.4 for our experiments. |
| Experiment Setup | Yes | Specifically, between 0 and 0.1, α is varied according to 10^i with i varying from -3 to -1 with steps of 0.2, and from 0.1 to 2.0 (linearly) with steps of 0.1. In addition, λ is varied from 0 to 0.9 with steps of 0.1 and from 0.9 to 1.0 with steps of 0.01. The initial weight vector is the zero vector in all domains. As with the evaluation experiments, we performed a scan over the step-size α and the trace-decay parameter λ. Specifically, we looked at all combinations of α ∈ {0.20, 0.50, 0.80, 1.10, 1.40, 1.70, 2.00} and λ ∈ {0.00, 0.50, 0.80, 0.90, 0.95, 0.99}... We used a discount factor γ = 0.999 and ϵ-greedy exploration with ϵ = 0.01. The weight vector was initialized to the zero vector. |
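The Pseudocode row above cites Algorithm 2, true online TD(λ). As a rough illustration of the update that algorithm performs, here is a minimal Python sketch of one step of true online TD(λ) with linear function approximation, following the published update equations (Dutch-style trace plus the V_old correction term). The single-state test domain, feature vector, and parameter values below are illustrative assumptions, not settings taken from the paper:

```python
import numpy as np

def true_online_td_step(w, e, v_old, phi, phi_next, reward,
                        alpha, gamma, lam):
    """One step of true online TD(lambda) with linear value function w @ phi."""
    v = float(w @ phi)            # value of current state (before update)
    v_next = float(w @ phi_next)  # value of next state (before update)
    delta = reward + gamma * v_next - v
    # Dutch trace: decay, add phi, subtract the overshoot along phi
    e = gamma * lam * e + phi - alpha * gamma * lam * float(e @ phi) * phi
    # Weight update with the true-online correction term (v - v_old)
    w = w + alpha * (delta + v - v_old) * e - alpha * (v - v_old) * phi
    return w, e, v_next           # v_next becomes v_old at the next step

# Illustrative use: a single-state continuing task with reward 1 per step
# and gamma = 0.9, whose true value is 1 / (1 - 0.9) = 10.
phi = np.array([1.0])
w, e, v_old = np.zeros(1), np.zeros(1), 0.0
for _ in range(5000):
    w, e, v_old = true_online_td_step(w, e, v_old, phi, phi, reward=1.0,
                                      alpha=0.1, gamma=0.9, lam=0.9)
print(float(w @ phi))  # estimate approaches 10
```

Note the key difference from conventional accumulating-trace TD(λ): the trace is decayed toward zero along the current feature direction (the `alpha * gamma * lam * (e @ phi) * phi` term), and the weight update carries an extra correction proportional to `v - v_old`, which is what makes the online algorithm exactly match the offline λ-return at every step.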