PDSketch: Integrated Domain Programming, Learning, and Planning
Authors: Jiayuan Mao, Tomás Lozano-Pérez, Josh Tenenbaum, Leslie Kaelbling
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally verify the efficiency and effectiveness of PDSketch in two domains: BabyAI, a 2D grid-world environment that focuses on navigation, and Painting Factory, a simulated table-top robotic environment that paints and moves blocks. |
| Researcher Affiliation | Academia | Jiayuan Mao1 Tomás Lozano-Pérez1 Joshua B. Tenenbaum1,2,3 Leslie Pack Kaelbling1 1 MIT Computer Science & Artificial Intelligence Laboratory 2 MIT Department of Brain and Cognitive Sciences 3 Center for Brains, Minds and Machines |
| Pseudocode | No | The paper provides code-like examples for PDSketch definitions (e.g., Figure 2, 4, 5) but does not include any blocks explicitly labeled as "Pseudocode" or "Algorithm". |
| Open Source Code | Yes | 3. (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] see the supplementary material. |
| Open Datasets | Yes | BabyAI [Chevalier-Boisvert et al., 2019] is an image-based 2D grid-world environment |
| Dataset Splits | No | The paper mentions "training data" and "test generalization" but does not explicitly describe train/validation/test splits or specific percentages for them. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments in the main text. The ethics checklist mentions details might be in the supplementary material, but this information is not present in the provided paper excerpt. |
| Software Dependencies | No | The paper mentions software frameworks like "TensorFlow" and "PyTorch" but does not provide specific version numbers for these or other ancillary software components used in the experiments. |
| Experiment Setup | No | The paper mentions training on specific environments and finetuning for "one epoch", but it does not provide specific experimental setup details, such as hyperparameter values (e.g., learning rate, batch size, optimizer settings), in the main text. |