Learning with Language-Guided State Abstractions
Authors: Andi Peng, Ilia Sucholutsky, Belinda Z. Li, Theodore Sumers, Thomas L. Griffiths, Jacob Andreas, Julie Shah
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on simulated robotic tasks show that LGA yields state abstractions similar to those designed by humans, but in a fraction of the time, and that these abstractions improve generalization and robustness in the presence of spurious correlations and ambiguous specifications. |
| Researcher Affiliation | Academia | Andi Peng (MIT), Ilia Sucholutsky (Princeton), Belinda Z. Li (MIT), Theodore R. Sumers (Princeton), Thomas L. Griffiths (Princeton), Jacob Andreas (MIT), Julie A. Shah (MIT) |
| Pseudocode | No | The paper does not contain any pseudocode or explicitly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code or a link to a code repository. |
| Open Datasets | Yes | We generate robotic control tasks from VIMA (Jiang et al., 2022), a vision-based manipulation environment. |
| Dataset Splits | No | The paper mentions 'training distributions' and 'test distributions' but does not explicitly describe a validation set or specific train/validation/test splits. |
| Hardware Specification | Yes | All computation was done on two NVIDIA GeForce RTX 3090 GPUs. ... We illustrate the utility of the learned abstractions on mobile manipulation tasks with a Spot robot. |
| Software Dependencies | No | The paper mentions software like 'Sentence-BERT' and 'PyBullet' but does not provide specific version numbers for these or other dependencies (see the version-recording sketch after this table). |
| Experiment Setup | Yes | We train all networks to convergence for a maximum of 750 epochs. |
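
Because the paper names Sentence-BERT and PyBullet without pinning versions, a reproduction will need to record whichever versions end up installed. Below is a minimal sketch for doing so; the package names are assumptions (Sentence-BERT is commonly distributed as `sentence-transformers`, PyBullet as `pybullet`), and the extra entries (`torch`, `numpy`) are hypothetical additions, not dependencies stated in the paper.

```python
# Minimal sketch: record installed versions of the dependencies named (unversioned)
# in the paper. Package names are assumptions, not confirmed by the authors.
from importlib.metadata import version, PackageNotFoundError

ASSUMED_PACKAGES = ["sentence-transformers", "pybullet", "torch", "numpy"]

def record_versions(packages):
    """Return a dict mapping package name -> installed version, or None if absent."""
    found = {}
    for name in packages:
        try:
            found[name] = version(name)
        except PackageNotFoundError:
            found[name] = None  # not installed in this environment
    return found

if __name__ == "__main__":
    for pkg, ver in record_versions(ASSUMED_PACKAGES).items():
        print(f"{pkg}: {ver or 'not installed'}")
```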
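The Experiment Setup row quotes only a 750-epoch cap with training to convergence. A minimal sketch of how such a setup might be reproduced is given below, assuming a PyTorch-style loop; the patience and improvement thresholds are hypothetical choices, not values reported in the paper.

```python
import torch

MAX_EPOCHS = 750   # cap reported in the paper
PATIENCE = 20      # hypothetical convergence criterion (not from the paper)
MIN_DELTA = 1e-4   # hypothetical minimum loss improvement to reset patience

def train_to_convergence(model, loader, loss_fn, optimizer):
    """Train until the epoch loss stops improving, but never beyond MAX_EPOCHS."""
    best_loss, epochs_without_improvement = float("inf"), 0
    for epoch in range(MAX_EPOCHS):
        epoch_loss = 0.0
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        epoch_loss /= len(loader)

        # Simple early-stopping check standing in for "training to convergence".
        if best_loss - epoch_loss > MIN_DELTA:
            best_loss, epochs_without_improvement = epoch_loss, 0
        else:
            epochs_without_improvement += 1
        if epochs_without_improvement >= PATIENCE:
            break  # treated as converged
    return model
```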