Autonomous Task Sequencing for Customized Curriculum Design in Reinforcement Learning
Authors: Sanmit Narvekar, Jivko Sinapov, Peter Stone
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We use our approach to automatically sequence tasks for 3 agents with varying sensing and action capabilities in an experimental domain, and show that our method produces curricula customized for each agent that improve performance relative to learning from scratch or using a different agent's curriculum. |
| Researcher Affiliation | Academia | Sanmit Narvekar, Jivko Sinapov, and Peter Stone; Department of Computer Science, University of Texas at Austin; {sanmit, jsinapov, pstone}@cs.utexas.edu |
| Pseudocode | Yes | Algorithm 1 GENERATECURRICULUM(Mt, π, β, δ, ϵ) and Algorithm 2 RECURSETASKSELECT(M, π, β, ϵ, C); a hedged sketch of the Algorithm 1 loop appears below the table. |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper describes a custom-built grid world domain ('The target task Mt was a 10x10 grid world...'), which is a simulated environment rather than a publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper does not provide specific training, validation, or test dataset splits in the traditional sense, as it operates in a simulated reinforcement learning environment where agents learn directly from interaction with the environment. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions learning algorithms (Sarsa(λ), value function transfer, CMAC tile coding) and their parameters (ϵ=0.1, λ=0.9, α=0.1) but does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9). A hedged Sarsa(λ) sketch appears below the table. |
| Experiment Setup | Yes | Each agent was initialized with a uniform random policy, and given an initial learning budget β of 500, which was increased by 500 in each iteration of the loop in Algorithm 1. In order to add a source task, we specified it had to affect the policy by at least ϵ = 0.1. Curriculum generation was terminated when a return δ = 700 was reached. Tasks were identified as solved using the policy convergence method described in Section 4. ... The exploration rate ϵ was set to 0.1, eligibility trace parameter λ to 0.9, and learning rate α to 0.1. |
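
The budget-increasing loop quoted for Algorithm 1 and the experiment setup can be summarized in code. The sketch below is an illustration under stated assumptions, not the authors' implementation: `train_on`, `evaluate_return`, and `select_source_task` are hypothetical callables supplied by the caller, and only the quoted constants (initial budget β = 500, 500 increment per iteration, admission threshold ϵ = 0.1, termination return δ = 700) come from the paper.

```python
# Hedged sketch of the budget-increasing curriculum loop described for Algorithm 1.
# train_on, evaluate_return, and select_source_task are hypothetical callables;
# only the constants (budget, its increment, epsilon, delta) are quoted from the paper.

def generate_curriculum(target_task, policy, candidate_tasks,
                        train_on, evaluate_return, select_source_task,
                        initial_budget=500, budget_step=500,
                        epsilon=0.1, delta=700):
    """Grow a curriculum until the policy reaches return delta on the target task."""
    curriculum = []
    budget = initial_budget
    while evaluate_return(policy, target_task) < delta:
        # Admit a source task only if training on it changes the policy by at
        # least epsilon; otherwise fall back to training on the target task itself.
        source = select_source_task(candidate_tasks, policy, budget, epsilon, curriculum)
        if source is None:
            source = target_task
        train_on(policy, source, budget)   # learn on the chosen task within the budget
        curriculum.append(source)
        budget += budget_step              # learning budget grows by 500 each iteration
    return curriculum
```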
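
The learner parameters quoted above (Sarsa(λ) with CMAC tile coding, ϵ = 0.1, λ = 0.9, α = 0.1) correspond to a standard linear Sarsa(λ) update. The sketch below is a minimal illustration of that update with replacing traces over active tile indices; the feature interface and γ = 1.0 are assumptions for illustration, not the paper's CMAC implementation.

```python
# Minimal sketch of Sarsa(lambda) with a linear tile-coded value function.
# ALPHA, LAMBDA, EPSILON are the values quoted in the paper; GAMMA is assumed.
# active_features are the tile indices active for a (state, action) pair.
import numpy as np

ALPHA, LAMBDA, EPSILON, GAMMA = 0.1, 0.9, 0.1, 1.0

def epsilon_greedy_action(weights, active_features_per_action, rng):
    """Choose an action epsilon-greedily from linear Q-values over active tiles."""
    if rng.random() < EPSILON:
        return int(rng.integers(len(active_features_per_action)))
    q_values = [weights[features].sum() for features in active_features_per_action]
    return int(np.argmax(q_values))

def sarsa_lambda_update(weights, traces, active_features, reward,
                        next_active_features, done):
    """One Sarsa(lambda) update with replacing traces on the active tile indices."""
    q_sa = weights[active_features].sum()
    q_next = 0.0 if done else weights[next_active_features].sum()
    td_error = reward + GAMMA * q_next - q_sa
    traces *= GAMMA * LAMBDA            # decay all eligibility traces
    traces[active_features] = 1.0       # replacing traces on the tiles that were active
    weights += ALPHA * td_error * traces
    return weights, traces
```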