Temporal Predictive Coding For Model-Based Planning In Latent Space

Authors: Tung D. Nguyen, Rui Shu, Tuan Pham, Hung Bui, Stefano Ermon

ICML 2021

Reproducibility assessment. Each entry below gives the variable, the result, and the supporting LLM response.

Research Type
Result: Experimental
LLM Response: "We evaluate our model on a challenging modification of standard DMControl tasks where the background is replaced with natural videos that contain complex but irrelevant information to the planning task. Our experiments show that our model is superior to existing methods in the challenging complex-background setting while remaining competitive with current state-of-the-art models in the standard setting."

Researcher Affiliation
Result: Collaboration
LLM Response: "*Equal contribution. ¹VinAI Research, ²Stanford University. Correspondence to: Tung Nguyen <v.tungnd13@vinai.io>, Rui Shu <ruishu@stanford.edu>."

Pseudocode
Result: No
LLM Response: No structured pseudocode or algorithm blocks were found in the paper.

Open Source Code
Result: No
LLM Response: The paper does not include an unambiguous statement or a link indicating that the authors are releasing the source code for their methodology.

Open Datasets
Result: Yes
LLM Response: "For the standard setting, we test our model on 6 DeepMind Control (DMC) tasks (Tassa et al., 2018): Cartpole Swingup, Cheetah Run, Walker Run, Pendulum Swingup, Hopper Hop and Cup Catch. In the natural background setting, we replace the background of each data trajectory with a video taken from the Kinetics dataset (Kay et al., 2017)." (An illustrative sketch of this background replacement appears after these entries.)

Dataset Splits
Result: No
LLM Response: The paper states "We split the original dataset into two separate sets for training and evaluation to also test the generalization of each method," but does not give split percentages or sample counts for training, validation, or test sets. (A hypothetical trajectory-split sketch also follows these entries.)

Hardware Specification
Result: No
LLM Response: The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.

Software Dependencies
Result: No
LLM Response: The paper does not list specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9).

Experiment Setup
Result: No
LLM Response: The paper does not report experimental setup details for its proposed model, such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. It only mentions that "For the baselines, we use the best set of hyperparameters as reported in their paper."

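As a concrete illustration of the natural-background setting noted under Open Datasets, the sketch below composites a natural video frame behind a rendered DMC agent. This is not the authors' code: the use of dm_control's segmentation render to mask background pixels, the 64x64 resolution, and the random stand-in for a Kinetics frame are all assumptions made to keep the sketch self-contained.

```python
# Minimal sketch (assumptions, not the paper's code) of replacing a DMC
# task's background with a natural video frame.
import numpy as np
from dm_control import suite

env = suite.load(domain_name="cheetah", task_name="run")
env.reset()

# RGB render of the current frame; 64x64 is an assumed resolution.
frame = env.physics.render(height=64, width=64, camera_id=0)

# Segmentation render: channel 0 holds geom ids, with -1 at pixels not
# covered by any geom (the sky). Floor geoms survive this mask, so the
# paper's exact masking rule may differ.
seg = env.physics.render(height=64, width=64, camera_id=0, segmentation=True)
background = seg[..., 0] == -1

# Stand-in for a frame from a Kinetics clip (Kay et al., 2017), already
# resized to 64x64; random pixels keep the sketch runnable offline.
video_frame = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

composited = np.where(background[..., None], video_frame, frame)
```

In a full pipeline the video frame would advance with each environment step, so the background itself moves; that moving, task-irrelevant content is what makes this setting hard for reconstruction-based models.
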
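Similarly, since the Dataset Splits entry quotes a trajectory-level train/evaluation split without sizes, the following is a minimal, hypothetical version of such a split. The function name and the 0.1 default are illustrative only; the paper reports no split percentages or counts.

```python
import random

def split_trajectories(trajectories, eval_fraction=0.1, seed=0):
    """Hold out a fraction of whole trajectories for evaluation.

    The 0.1 default is an assumption; the paper gives no split sizes.
    Splitting at the trajectory level (rather than the frame level)
    avoids leaking frames from one trajectory into both sets.
    """
    trajs = list(trajectories)
    random.Random(seed).shuffle(trajs)  # deterministic shuffle
    n_eval = max(1, int(len(trajs) * eval_fraction))
    return trajs[n_eval:], trajs[:n_eval]  # (train, eval)
```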