The Ingredients of Real World Robotic Reinforcement Learning

Authors: Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

ICLR 2020

Reproducibility assessment; each entry gives the variable, the result, and the LLM response supporting it.
Research Type: Experimental
LLM Response: "We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand. Results and videos can be found at https://sites.google.com/view/realworld-rl/. In our experimental evaluation, we study how well the R3L system, described in Sections 2 and 4, can learn under realistic settings: visual observations, no hand-specified rewards, and no resets."
Researcher Affiliation: Academia
LLM Response: Henry Zhu (1), Justin Yu (1), Abhishek Gupta (1), Dhruv Shah (1), Kristian Hartikainen (2), Avi Singh (1), Vikash Kumar (3), Sergey Levine (1). Affiliations: (1) University of California, Berkeley; (2) University of Oxford; (3) University of Washington.
Pseudocode: Yes
LLM Response: Appendix A (Algorithm Details) presents Algorithm 1, "Real-World Robotic Reinforcement Learning (R3L)"; a hedged sketch of the loop it describes follows below.
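For context on what Algorithm 1 covers, the following is a minimal sketch of the alternating loop the paper describes: a SAC task policy rewarded by a VICE goal classifier, interleaved with a SAC perturbation controller rewarded by an RND novelty bonus so that training runs without manual resets. Every object and method signature here is a hypothetical placeholder supplied by the caller, not the authors' implementation:

```python
# Hypothetical sketch of the alternating R3L loop (Algorithm 1).
# env, policy, perturber, vice, rnd, and the buffers are assumed
# interfaces passed in by the caller, not the authors' code.
def r3l_training_loop(env, policy, perturber, vice, rnd,
                      fwd_buffer, pert_buffer,
                      num_iterations, steps_per_phase):
    obs = env.reset()  # one initial reset; training then runs reset-free
    for _ in range(num_iterations):
        # Phase 1: forward policy, rewarded by the VICE goal classifier.
        for _ in range(steps_per_phase):
            action = policy.act(obs)
            next_obs = env.step(action)
            reward = vice.reward(next_obs)   # learned from goal images
            fwd_buffer.add(obs, action, reward, next_obs)
            obs = next_obs
        policy.update(fwd_buffer)            # SAC update on task reward
        vice.update(fwd_buffer)              # refit classifier with new negatives

        # Phase 2: perturbation controller, rewarded by RND novelty,
        # which replaces manual resets by driving the system to new states.
        for _ in range(steps_per_phase):
            action = perturber.act(obs)
            next_obs = env.step(action)
            bonus = rnd.bonus(next_obs)      # predictor-network error as reward
            pert_buffer.add(obs, action, bonus, next_obs)
            obs = next_obs
        perturber.update(pert_buffer)        # SAC update on intrinsic reward
        rnd.update(pert_buffer)              # train the RND predictor network
    return policy
```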
Open Source Code: No
LLM Response: The paper links to a project website with results and videos (https://sites.google.com/view/realworld-rl/), but it contains no explicit statement that source code for the described method is released, and no direct link to a code repository.
Open Datasets: No
LLM Response: The paper describes specific tasks and a robotic setup (the D'Claw hand) for data collection, indicating the authors generated their own data for the experiments (e.g., "200 goal images, which takes under an hour to collect in the real world for each task"). It provides no concrete access information, citation, or link to a publicly available dataset used for training.
Dataset Splits: No
LLM Response: The paper describes evaluation procedures and general training parameters, but it does not specify explicit numerical splits (e.g., percentages or sample counts) for training, validation, and test data in a reproducible manner for the overall RL system.
Hardware Specification: No
LLM Response: The paper mentions using a "D'Claw robotic hand with an RGB camera" for the experiments, but it provides no details about the computing hardware (e.g., GPU models, CPU types, or memory) used for training the models.
Software Dependencies: No
LLM Response: The paper lists the components of its system (SAC, VICE, RND, VAE) and their hyperparameters, but it does not specify software dependencies such as programming languages or libraries with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup: Yes
LLM Response: Appendix B.0.1 (Hyperparameters) reports, for example:
- General: standard deviation update coefficient; image sizes [(16, 16, 3), (32, 32, 3), (64, 64, 3)]
- SAC: learning rate 3e-4; discount γ 0.99; batch size 256
- Convnet: filters [(64, 64, 64), (16, 32, 64)]; stride (2, 2); kernel sizes (3, 3); pooling [MaxPool2D, None]; actor/critic FC layers [(512, 512), (256, 256, 256)]
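To illustrate how the listed numbers combine, here is a minimal PyTorch sketch of one of the swept convnet variants (filters (16, 32, 64), stride 2, kernel size 3, no pooling, FC layers (256, 256, 256)) applied to the 32x32x3 image size. The padding, ReLU activations, and layer ordering are assumptions, since the paper's table does not state them:

```python
import torch
import torch.nn as nn

# Hypothetical reconstruction of one listed encoder variant; padding=1
# and ReLU activations are assumptions not stated in the paper.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 32x32 -> 16x16
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 16x16 -> 8x8
    nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 8x8 -> 4x4
    nn.ReLU(),
    nn.Flatten(),                  # 64 * 4 * 4 = 1024 features
    nn.Linear(64 * 4 * 4, 256),    # FC layers (256, 256, 256)
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
)

x = torch.zeros(256, 3, 32, 32)    # batch size 256, as in the SAC settings
features = encoder(x)              # -> shape (256, 256)
print(features.shape)
```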