Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning to Look by Self-Prediction

Authors: Matthew Koichi Grimes, Joseph Varughese Modayil, Piotr W Mirowski, Dushyant Rao, Raia Hadsell

TMLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We describe our experimental setup, then present the claims supported by our results. All results are from the trained camera agent, evaluated without the exploration noise used during ε-greedy training. [...] Figure 4: Agent and predictor training. X axis is wall time in seconds, spanning 13 hours of training. [...] Table 1: Target sensor's prediction error at episode end (lower is better).
Researcher Affiliation | Industry | Matthew Koichi Grimes, Joseph Modayil, Piotr Mirowski, Dushyant Rao, Raia Hadsell. DeepMind, London, UK
Pseudocode | No | The paper describes the methodology in prose and includes equations but does not present any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or provide a link to a code repository.
Open Datasets | No | The paper describes using the MuJoCo physics simulator and a specific hand model, generating data through interaction with this simulated environment. It does not mention using or providing access to any publicly available or open datasets.
Dataset Splits | No | The paper describes a reinforcement learning setup where data is generated dynamically through interaction with a simulated environment. There are no traditional fixed training, validation, or test dataset splits mentioned, as data is continuously sampled from the replay buffer.
Hardware Specification | No | Each environment process runs on a machine with 1 CPU and 1.1 GB of RAM. The learner runs on a machine with 2 CPUs, 4.9 GB of RAM, and a TPU. While memory and CPU count are provided, the specific CPU and TPU models needed for a complete hardware specification are not mentioned.
Software Dependencies | No | The paper mentions using the MuJoCo physics simulator and the Adam optimizer, but it does not specify version numbers for these or any other software libraries or programming languages.
Experiment Setup | Yes | The policy network trains with a learning rate of 10⁻⁵, while the critic and predictor networks use a learning rate of 10⁻⁴. We use Adam (Kingma & Ba, 2015) to minimize the losses. [...] A single learner process samples mini-batches of 256 transitions from the replay buffer.
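The Experiment Setup and Dataset Splits rows describe a standard off-policy loop: transitions stream into a replay buffer, a learner samples mini-batches of 256, and ε-greedy exploration is switched off at evaluation. A minimal sketch of those quoted details follows; this is not the authors' code, and all class and function names below are illustrative assumptions.

```python
import random

# Hyperparameters quoted in the Experiment Setup row.
POLICY_LR = 1e-5     # policy network learning rate (Adam)
CRITIC_LR = 1e-4     # critic and predictor learning rate (Adam)
BATCH_SIZE = 256     # mini-batch size sampled by the learner

class ReplayBuffer:
    """Fixed-capacity buffer with uniform sampling (hypothetical sketch)."""
    def __init__(self, capacity=100_000):
        self.capacity = capacity
        self.storage = []
        self.pos = 0

    def add(self, transition):
        # Overwrite oldest entries once the buffer is full.
        if len(self.storage) < self.capacity:
            self.storage.append(transition)
        else:
            self.storage[self.pos] = transition
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size=BATCH_SIZE):
        # Uniform sampling without replacement within a mini-batch.
        return random.sample(self.storage, batch_size)

def select_action(q_values, epsilon):
    """ε-greedy selection: random with probability ε during training;
    ε=0 at evaluation, matching the report's note that results are
    evaluated without exploration noise."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

At evaluation time, `select_action(q_values, epsilon=0.0)` always returns the greedy action, while the learner would draw `buffer.sample()` mini-batches of 256 transitions generated concurrently by the environment processes.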