Classical Planning in Deep Latent Space: Bridging the Subsymbolic-Symbolic Boundary
Authors: Masataro Asai, Alex Fukunaga
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate LatPlan using image-based versions of 3 planning domains: 8-puzzle, Towers of Hanoi, and LightsOut. |
| Researcher Affiliation | Academia | Masataro Asai, Alex Fukunaga Graduate School of Arts and Sciences The University of Tokyo |
| Pseudocode | No | The paper describes algorithmic steps but does not include formal pseudocode blocks or algorithms labeled as such. |
| Open Source Code | Yes | Latplan code is available on Github. |
| Open Datasets | Yes | MNIST 8-puzzle is an image-based version of the 8-puzzle, where tiles contain hand-written digits (0-9) from the MNIST database (LeCun et al. 1998). |
| Dataset Splits | No | The paper mentions training various components (SAE, AAE, AD, SD) but does not specify explicit training, validation, and test splits for the data used to train the models. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., CPU/GPU models, memory specifications). |
| Software Dependencies | No | The paper mentions using "a modified version of Fast Downward (Helmert 2006)" but does not specify its version number or any other software dependencies with their respective versions. |
| Experiment Setup | No | The paper describes the general training processes (e.g., minimizing reconstruction loss, annealing Gumbel-Softmax temperature) but does not provide specific hyperparameter values like learning rates, batch sizes, or explicit training schedules. |
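The Experiment Setup row notes that the paper describes annealing the Gumbel-Softmax temperature without giving concrete values. A minimal sketch of what such an annealing schedule typically looks like is below; the schedule form and every numeric value (`tau0`, `tau_min`, `rate`) are illustrative assumptions, not settings from the paper:

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Sample a relaxed one-hot vector via the Gumbel-Softmax trick.

    Lower temperatures tau push the output closer to a discrete one-hot.
    """
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    e = np.exp(y - y.max())  # softmax with max-subtraction for stability
    return e / e.sum()

# Exponential annealing: tau_t = max(tau_min, tau0 * exp(-rate * t)).
# All hyperparameter values here are assumptions for illustration only.
tau0, tau_min, rate = 5.0, 0.7, 0.001
rng = np.random.default_rng(0)
logits = np.array([2.0, 0.5, -1.0])

for t in (0, 1000, 5000):
    tau = max(tau_min, tau0 * np.exp(-rate * t))
    sample = gumbel_softmax(logits, tau, rng)
```

The temperature starts high (smooth, nearly uniform relaxations that keep gradients informative) and decays toward a floor, at which point samples are close to discrete one-hot vectors.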