Toward Compositional Generalization in Object-Oriented World Modeling
Authors: Linfeng Zhao, Lingzhi Kong, Robin Walters, Lawson L.S. Wong
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study how compositional generalization is achieved in practice, using the Object Library and generalization metrics; additional results are provided in Appendix G (Section 6, Experiments). We compare all methods on the Basic Shapes environment in terms of their generalization performance and scalability, shown in Table 1 (Section 6.2, Results and analysis). |
| Researcher Affiliation | Academia | Linfeng Zhao, Lingzhi Kong, Robin Walters, Lawson L.S. Wong (Khoury College of Computer Sciences, Northeastern University, MA). Correspondence to: Linfeng Zhao <zhao.linf@northeastern.edu>. |
| Pseudocode | No | The paper describes methods in text and uses diagrams but does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | More resources are available under http://lfzhao.com/oowm. |
| Open Datasets | No | We designed two instances of the Object Library (OBJLIB) environment. They are built upon the 2-D shape version of the Block Pushing environment (Kipf et al., 2019). |
| Dataset Splits | No | We follow the setup in Kipf et al. (2019), using 1K episodes for training (...) and 10K episodes of length 10 for evaluation. |
| Hardware Specification | Yes | All models are trained on an NVIDIA GeForce RTX 2080 Ti GPU with 11 GB memory. |
| Software Dependencies | No | The paper mentions optimizers (Adam) and implies frameworks (e.g., PyTorch, given the context of deep learning models like GNNs and VAEs), but it does not specify version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | We use Adam optimizer with a learning rate of 5e-4 and batch size of 1024, margin of hinge loss γ = 1.0, same as the original paper. (A hedged training sketch using these settings follows the table.) |
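
The quoted hyperparameters follow the contrastive world-model setup of Kipf et al. (2019), which the paper builds on. The sketch below is not the authors' code: the encoder, transition network, dimensions, and random data are illustrative placeholders; only the Adam optimizer, learning rate 5e-4, batch size 1024, and hinge-loss margin γ = 1.0 come from the report above.

```python
# Minimal sketch (not the authors' implementation) of contrastive world-model
# training with a hinge loss, in the style of C-SWM (Kipf et al., 2019).
# Hyperparameters quoted above: Adam, lr 5e-4, batch size 1024, margin 1.0.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, ACT_DIM, LATENT_DIM = 50, 4, 32   # placeholder dimensions
GAMMA = 1.0                                 # hinge-loss margin from the paper

# Toy encoder and transition networks standing in for the object-oriented model.
encoder = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM))
transition = nn.Sequential(nn.Linear(LATENT_DIM + ACT_DIM, 128), nn.ReLU(),
                           nn.Linear(128, LATENT_DIM))

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(transition.parameters()),
    lr=5e-4,  # learning rate quoted in the report
)

def contrastive_hinge_loss(obs, act, next_obs, gamma=GAMMA):
    """Energy-based hinge loss: pull predicted next latent toward the true one,
    push it away from in-batch negatives by at least the margin gamma."""
    z, z_next = encoder(obs), encoder(next_obs)
    z_pred = z + transition(torch.cat([z, act], dim=-1))         # predicted next latent
    pos = F.mse_loss(z_pred, z_next, reduction="none").sum(-1)   # positive energy
    z_neg = z_next[torch.randperm(z_next.size(0))]               # shuffled negatives
    neg = F.mse_loss(z_neg, z_next, reduction="none").sum(-1)    # negative energy
    return (pos + F.relu(gamma - neg)).mean()

# One illustrative update step on random data (batch size 1024 as quoted).
obs, act, next_obs = torch.randn(1024, OBS_DIM), torch.randn(1024, ACT_DIM), torch.randn(1024, OBS_DIM)
loss = contrastive_hinge_loss(obs, act, next_obs)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```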