Grounded Object-Centric Learning
Authors: Avinash Kori, Francesco Locatello, Fabio De Sousa Ribeiro, Francesca Toni, Ben Glocker
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical study evaluates COSA on a variety of popular object discovery (Table 1, 2) and visual reasoning benchmarks (Table 3). |
| Researcher Affiliation | Collaboration | Avinash Kori, Francesco Locatello, Fabio De Sousa Ribeiro, Francesca Toni, and Ben Glocker. Imperial College London; Institute of Science and Technology Austria. a.kori21@imperial.ac.uk. Work done when the author was a part of AWS. |
| Pseudocode | Yes | Algorithm 1 COnditional Slot Attention (COSA). A hedged sketch of the underlying slot-attention update is given below the table. |
| Open Source Code | Yes | All experimental scripts and scripts to generate the datasets are available on GitHub at https://github.com/koriavinash1/CoSA. |
| Open Datasets | Yes | In this work, we use multiple datasets for each case study. We make use of publicly available datasets that are released under the MIT License and are open for all research work. For details on the various datasets we used, please refer to App. C. |
| Dataset Splits | Yes | CLEVR... consists of 70,000 training, 15,000 validation, and 15,000 testing images, respectively. |
| Hardware Specification | Yes | We run all our experiments on a cluster with an Nvidia Tesla T4 16GB GPU card and an Intel(R) Xeon(R) Gold 6230 CPU. |
| Software Dependencies | No | The paper mentions using an 'adam optimizer' and refers to a base implementation from a GitHub repository, but does not list specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions). |
| Experiment Setup | Yes | We train all models with the Adam optimizer with a learning rate of 0.0004, a batch size of 16, early stopping with a patience of 5, and for a maximum of min(40 epochs, 40,000 steps). We use linear learning rate warmup for 10,000 steps with a decay rate of 0.5, followed by a reduce-on-plateau learning rate scheduler. A hedged sketch of this optimizer and scheduler wiring is given below the table. |
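
For readers checking the pseudocode row against the paper, the following is a minimal PyTorch sketch of the iterative slot-attention update (Locatello et al., 2020) that CoSA builds on. It does not reproduce the conditional slot initialisation or grounding mechanism that distinguish CoSA; the `slots_init` argument, layer names, and the omission of the per-iteration MLP residual are illustrative assumptions, not the paper's interface.

```python
import torch
import torch.nn as nn

class SlotAttention(nn.Module):
    """Minimal sketch of the slot-attention update; CoSA's conditional
    initialisation is NOT implemented here (slots_init is a stand-in)."""

    def __init__(self, dim, iters=3, eps=1e-8):
        super().__init__()
        self.iters, self.eps = iters, eps
        self.scale = dim ** -0.5
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_inputs = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, inputs, slots_init):
        # inputs: (B, N, D) encoder features; slots_init: (B, K, D) initial slots
        inputs = self.norm_inputs(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)
        slots = slots_init
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # attention logits (B, K, N); softmax over slots so slots compete for inputs
            attn = torch.softmax(torch.einsum('bkd,bnd->bkn', q, k) * self.scale, dim=1)
            attn = attn / (attn.sum(dim=-1, keepdim=True) + self.eps)  # normalise per slot
            updates = torch.einsum('bkn,bnd->bkd', attn, v)            # weighted mean of values
            slots = self.gru(updates.reshape(-1, updates.size(-1)),
                             slots.reshape(-1, slots.size(-1))).view_as(slots)
        return slots

# usage with hypothetical shapes:
# feats = torch.randn(2, 1024, 64); init = torch.randn(2, 7, 64)
# out = SlotAttention(dim=64)(feats, init)   # -> (2, 7, 64)
```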
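The optimizer and scheduler settings quoted in the experiment-setup row can be wired up roughly as follows. This is a hedged sketch, not the authors' training script: the placeholder model, the placeholder validation loop, and the reading of the quoted "decay rate of 0.5" as the reduce-on-plateau factor are assumptions.

```python
import torch

# Placeholder model; only the optimizer/scheduler wiring reflects the quoted setup.
model = torch.nn.Linear(64, 64)
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)   # Adam, lr 0.0004

# Linear warmup over the first 10,000 steps (stepped once per optimizer step).
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1e-3, end_factor=1.0, total_iters=10_000)

# Reduce-on-plateau scheduler applied afterwards; interpreting the quoted
# "decay rate of 0.5" as its reduction factor is an assumption.
plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5)

best_val, bad_epochs, patience = float('inf'), 0, 5
for epoch in range(40):                       # capped at min(40 epochs, 40,000 steps)
    # ... one epoch of training with batch size 16, calling warmup.step()
    #     after each of the first 10,000 optimizer steps ...
    val_loss = 0.0                            # placeholder validation metric
    plateau.step(val_loss)                    # reduce LR when validation plateaus
    bad_epochs = 0 if val_loss < best_val else bad_epochs + 1
    best_val = min(best_val, val_loss)
    if bad_epochs >= patience:                # early stopping with patience 5
        break
```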