SIMONe: View-Invariant, Temporally-Abstracted Object Representations via Unsupervised Video Decomposition

Authors: Rishabh Kabra, Daniel Zoran, Goker Erdogan, Loic Matthey, Antonia Creswell, Matt Botvinick, Alexander Lerchner, Chris Burgess

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate these capabilities, as well as the model's performance in terms of view synthesis and instance segmentation, across three procedurally generated video datasets."
Researcher Affiliation | Industry | 1DeepMind, 2Wayve; work done at DeepMind. {rkabra, danielzoran, gokererdogan, lmatthey, tonicreswell, botvinick, lerchner}@deepmind.com, chrisburgess@wayve.ai
Pseudocode | No | The paper describes the architecture and generative process but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a repository link or an explicit statement about releasing the source code for the SIMONe model it describes.
Open Datasets | Yes | "Our results are based on three procedurally generated video datasets of multi-object scenes. In increasing order of difficulty, they are: Objects Room 9 [44], CATER (moving camera) [45], and Playroom [46]."
Dataset Splits | No | The paper mentions evaluating on "held-out data" and "unobserved views" but does not specify concrete percentages or sample counts for training, validation, and test splits, nor does it reference standard predefined splits in sufficient detail for reproducibility.
Hardware Specification | No | The paper does not describe the specific hardware used for its experiments (e.g., GPU/CPU models, memory, or cloud instance types).
Software Dependencies | No | The paper discusses the models and frameworks used but does not list specific software dependencies (e.g., library names with version numbers) needed to replicate the experiments.
Experiment Setup | Yes | "For all results in the paper, we set I and J to 8 each, and K = 16. ... α is generally set to 1, but available to tweak in case the scale of βo and βf is too small to be numerically stable. Unless explicitly mentioned, we set βo = βf. See Appendix A.3 for details."
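For context on the quoted K = 16 setting: SIMONe's central factorization, per the paper's title, is into view-invariant object latents (one per slot, shared across frames) and per-frame latents (shared across objects). The minimal NumPy sketch below only illustrates that latent layout; T, D, and all variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

K = 16   # object slots, as set in the paper
T = 4    # number of video frames (hypothetical)
D = 32   # latent dimensionality (hypothetical)

# View-invariant object latents: one per slot, constant across frames.
object_latents = np.random.randn(K, D)
# Object-invariant frame latents: one per frame, shared by all slots.
frame_latents = np.random.randn(T, D)

# Every (frame, object) pair gets a conditioning vector built from both
# latents, which a per-pixel decoder would then consume.
cond = np.concatenate(
    np.broadcast_arrays(object_latents[None, :, :],
                        frame_latents[:, None, :]),
    axis=-1,
)
assert cond.shape == (T, K, 2 * D)
```

The point of the factorization is that changing the viewpoint only changes the frame latents, leaving the K object latents untouched.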