Provably Learning Object-Centric Representations

Authors: Jack Brady, Roland S. Zimmermann, Yash Sharma, Bernhard Schölkopf, Julius von Kügelgen, Wieland Brendel

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically validate our results through experiments on synthetic data. Finally, we provide evidence that our theory holds predictive power for existing object-centric models by showing a close correspondence between models' compositionality and invertibility and their empirical identifiability.
Researcher Affiliation | Academia | 1 MPI for Intelligent Systems, Tübingen; 2 Tübingen AI Center, Tübingen; 3 University of Tübingen, Tübingen, Germany; 4 Department of Engineering, University of Cambridge, Cambridge, United Kingdom.
Pseudocode | No | The paper does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code/Website: brendel-group.github.io/objects-identifiability
Open Datasets | Yes | We generate image data using the Spriteworld renderer (Watters et al., 2019).
Dataset Splits | Yes | We train on 75,000 samples and use 6,000 and 5,000 for validation and test sets, respectively. (See the split sketch after this table.)
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types, or TPU versions) used for running its experiments.
Software Dependencies | No | The paper mentions using "PyTorch (Paszke et al., 2019)" but does not specify a precise version number for this or any other software dependency, which is required for reproducibility.
Experiment Setup | Yes | We train for 100 epochs with the Adam optimizer (Kingma & Ba, 2015) on batches of 64 with an initial learning rate of 10⁻³, which we decay by a factor of 10 after 50 epochs. (See the training-loop sketch after this table.)
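
The dataset-splits row above reports fixed sizes of 75,000 / 6,000 / 5,000 samples. Below is a minimal sketch of how such splits could be reproduced in PyTorch (the framework the entry itself names), assuming the 86,000 rendered Spriteworld images already sit in a tensor; the image shape, the variable names, and the seed are illustrative assumptions, not details from the paper.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in for the 86,000 rendered Spriteworld images; the 64x64 RGB
# shape is an assumption for illustration, not taken from the paper.
images = torch.rand(86_000, 3, 64, 64)
dataset = TensorDataset(images)

# Fixed-size splits as reported: 75,000 train, 6,000 validation, 5,000 test.
# A seeded generator keeps the split reproducible across runs.
train_set, val_set, test_set = random_split(
    dataset,
    lengths=[75_000, 6_000, 5_000],
    generator=torch.Generator().manual_seed(0),
)
```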
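
The experiment-setup row fixes the optimizer, schedule, batch size, and epoch count. The following is a hedged PyTorch sketch of a matching training loop: Adam at an initial learning rate of 10⁻³, batches of 64, 100 epochs, and a tenfold decay after epoch 50 via StepLR. The autoencoder-style model, the reconstruction loss, and the tiny stand-in dataset are assumptions for runnability; the paper's actual architecture is not reproduced here.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data and model: the paper trains object-centric models on
# Spriteworld images; this stand-in only needs to make the loop runnable.
images = torch.rand(1_000, 3, 64, 64)  # small proxy for the 75,000 training samples
loader = DataLoader(TensorDataset(images), batch_size=64, shuffle=True)
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256),
    nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64),
)

# Adam with the reported initial learning rate of 1e-3.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Decay the learning rate by a factor of 10 after 50 of the 100 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)
loss_fn = nn.MSELoss()  # reconstruction objective (assumed)

for epoch in range(100):
    for (batch,) in loader:
        optimizer.zero_grad()
        recon = model(batch).view_as(batch)
        loss_fn(recon, batch).backward()
        optimizer.step()
    scheduler.step()
```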