Latent Planning via Expansive Tree Search

Authors: Robert Gieselmann, Florian T. Pokorny

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we evaluate the effectiveness of ELAST for solving challenging downstream reaching tasks from visual observations. For that purpose, we use the closed loop control setting described in Sec. 4.4 and measure the performance in terms of average success rate. Moreover, we investigate the impact of different components and hyperparameters of our method on the quality of the computed solutions. The results of our benchmark evaluation are shown in Table 1. (A sketch of this success-rate evaluation is given after the table.)
Researcher Affiliation | Academia | Robert Gieselmann, KTH Royal Institute of Technology, Stockholm, Sweden (robgie@kth.se); Florian T. Pokorny, KTH Royal Institute of Technology, Stockholm, Sweden (fpokorny@kth.se)
Pseudocode | No | The paper describes the methods in text and uses flow diagrams (Figure 2, Figure 4) but does not provide a formal pseudocode block. (A generic, illustrative planning sketch is given after the table.)
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See supplemented code repository.
Open Datasets | No | In this regard, we assume no access to the underlying MDP dynamics, state distance metrics and only provide a fixed-size training dataset composed of random environment interactions. While the paper mentions using the Meta-World benchmark [60], it does not provide concrete access information (link, DOI, citation with authors/year) for a specific public dataset or their generated training data.
Dataset Splits | No | The paper mentions a fixed-size training dataset and evaluating on unseen scenarios, but it does not specify explicit percentages or counts for training, validation, and test splits, nor does it reference predefined validation splits.
Hardware Specification | Yes | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See App. A.1.3. We perform all experiments on a single NVIDIA GeForce RTX 2080 Ti with 11GB of VRAM.
Software Dependencies | No | The paper refers to Appendices A, B, and C for implementation details, but these appendices do not list specific version numbers for software components or libraries (e.g., PyTorch version, specific physics engine versions). (A sketch for recording such versions is given after the table.)
Experiment Setup | Yes | Detailed information on the hyperparameters and the implementation of the baselines can be found in App. C. A detailed overview of planning hyperparameters is given in App. A.1.2. Appendix A.1.3 provides specific values in Table 2.
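
The Research Type row quotes the paper's closed-loop evaluation protocol, which reports performance as an average success rate over rollouts. The following is a minimal sketch of such an evaluation loop; the Gym-style env and policy interfaces and the "success" info flag are assumptions made for illustration, not the paper's actual evaluation code.

    # Hypothetical closed-loop evaluation sketch for an average success rate.
    # `env` follows a Gym-style API and `policy` maps observations to actions;
    # both interfaces and the "success" info flag are assumptions, not the paper's API.
    def average_success_rate(env, policy, num_episodes=100, max_steps=200):
        successes = 0
        for _ in range(num_episodes):
            obs = env.reset()
            for _ in range(max_steps):
                obs, reward, done, info = env.step(policy(obs))
                if info.get("success", False):
                    successes += 1
                    break
                if done:
                    break
        return successes / num_episodes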
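
The Pseudocode row notes that the method is described only in prose and flow diagrams. For orientation, here is a generic expansive-space-tree-style planning loop in a learned latent space; the encoder, latent dynamics model, density-based node selection, and every name below are illustrative assumptions and should not be read as the authors' exact ELAST procedure.

    import numpy as np

    # Generic expansive-tree-search sketch in a learned latent space.
    # `encode`, `latent_dynamics`, and `sample_action` are assumed, illustrative
    # components; they do not correspond to functions in the authors' code release.
    def expansive_latent_tree_search(start_obs, goal_obs, encode, latent_dynamics,
                                     sample_action, max_nodes=2000,
                                     neighbor_radius=1.0, goal_radius=0.5):
        z_start, z_goal = encode(start_obs), encode(goal_obs)
        nodes, parents, actions = [z_start], [-1], [None]

        while len(nodes) < max_nodes:
            # EST-style expansion heuristic: prefer sparsely populated regions by
            # weighting each node inversely to the number of nearby tree nodes.
            Z = np.stack(nodes)
            density = np.array([(np.linalg.norm(Z - z, axis=1) < neighbor_radius).sum()
                                for z in nodes])
            weights = 1.0 / density
            idx = np.random.choice(len(nodes), p=weights / weights.sum())

            # Expand the selected node with a random action through the learned
            # latent dynamics model.
            a = sample_action()
            z_new = latent_dynamics(nodes[idx], a)
            nodes.append(z_new)
            parents.append(idx)
            actions.append(a)

            if np.linalg.norm(z_new - z_goal) < goal_radius:
                # Backtrack from the new node to recover the action sequence.
                plan, i = [], len(nodes) - 1
                while parents[i] != -1:
                    plan.append(actions[i])
                    i = parents[i]
                return list(reversed(plan))
        return None  # no plan found within the node budget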
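
The Software Dependencies row flags missing library versions. A small, hypothetical way to record them alongside experiment outputs is sketched below; the listed packages (PyTorch, NumPy) are assumptions about a typical stack, not a dependency list confirmed by the paper.

    import json
    import platform

    # Hypothetical sketch: dump interpreter and package versions next to results
    # so that a run can be reproduced with the same software stack later.
    def dump_environment(path="environment.json"):
        versions = {"python": platform.python_version()}
        for name in ("torch", "numpy"):
            try:
                module = __import__(name)
                versions[name] = module.__version__
            except ImportError:
                versions[name] = "not installed"
        with open(path, "w") as f:
            json.dump(versions, f, indent=2)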