The Logical Options Framework
Authors: Brandon Araki, Xiao Li, Kiran Vodrahalli, Jonathan DeCastro, Micah Fry, Daniela Rus
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate LOF on four tasks in discrete and continuous domains, including a 3D pick-and-place environment. |
| Researcher Affiliation | Collaboration | 1 CSAIL, Massachusetts Institute of Technology, Cambridge, MA, USA; 2 Department of Computer Science, Columbia University, New York City, NY, USA; 3 Toyota Research Institute, Cambridge, MA, USA; 4 MIT Lincoln Laboratory, Lexington, MA, USA. |
| Pseudocode | Yes | Algorithm 1 Learning and Planning with Logical Options |
| Open Source Code | Yes | Code for the discrete domain experiments is available at https://github.com/braraki/logical-options-framework. Code for the other domains is available in the supplementary material. |
| Open Datasets | Yes | The second environment is called the reacher domain, from OpenAI Gym (Fig. 3d). ... The third environment is called the pick-and-place domain, and it is a continuous 3D environment with a robotic Panda arm from CoppeliaSim and PyRep (James et al., 2019). (A minimal environment-loading sketch for the Gym reacher task appears after the table.) |
| Dataset Splits | No | The paper describes its experimental environments and tasks, but it does not specify explicit training, validation, or test dataset splits in terms of percentages or sample counts. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions algorithms and frameworks like Q-learning, PPO, Deep-QRM, but does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | No | The implementation details are discussed more fully in App. C. |
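
For readers checking the environment availability noted in the Open Datasets row, the sketch below shows how the Gym reacher task might be instantiated. It is a minimal sketch, assuming the classic `gym` API (pre-0.26 `reset`/`step` signatures) and the standard `Reacher-v2` task; the paper's reacher domain adds LOF-specific subgoal and safety propositions on top of this base task, which are not reproduced here.

```python
# Minimal sketch: load the OpenAI Gym reacher task and roll out a random policy.
# Assumes the classic gym API (reset() -> obs, step() -> 4-tuple) and the
# standard Reacher-v2 environment; the exact variant used in the paper may differ.
import gym

env = gym.make("Reacher-v2")
obs = env.reset()
for _ in range(100):
    action = env.action_space.sample()        # placeholder random policy
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```

The pick-and-place domain additionally requires a CoppeliaSim installation with the PyRep bindings (James et al., 2019) and is not sketched here.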