Group Equivariant Subsampling

Authors: Jin Xu, Hyunjik Kim, Thomas Rainforth, Yee Whye Teh

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we compare the performance of GAEs with equivariant subsampling to their non-equivariant counterparts that use standard subsampling/upsampling in object-centric representation learning. We show that GAEs give rise to more interpretable representations with better sample complexity and generalisation than their non-equivariant counterparts. In Appendix E.1, we show that group equivariant subsampling also yields generalisation gains on classification tasks. (A minimal sketch of the equivariant-subsampling idea appears after this table.)
Researcher Affiliation | Collaboration | 1 Department of Statistics, University of Oxford, UK. 2 DeepMind, UK.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; it provides mathematical definitions of its operations instead. (The equivariance property those definitions are built around is restated after this table.)
Open Source Code | Yes | Our implementation is built upon open source projects Harris et al. (2020); Paszke et al. (2019); Yadan (2019); Weiler and Cesa (2019b); Engelcke et al. (2020); Hunter (2007); Waskom (2021). https://github.com/jinxu06/gsubsampling
Open Datasets | Yes | To demonstrate basic properties of GAEs and compare sample complexity in the single-object scenario, we use Colored-dSprites (Matthey et al., 2017) and a modification of FashionMNIST (Xiao et al., 2017), where we first apply zero-padding to reach a size of 64 × 64, followed by random shifts, rotations and coloring. For multi-object datasets, we use Multi-dSprites (Kabra et al., 2019) and CLEVR6, a variant of CLEVR (Johnson et al., 2017) with up to 6 objects. (An illustrative preprocessing sketch appears after this table.)
Dataset Splits | No | The paper mentions training and test data but does not specify a validation split (e.g., percentages or counts for a held-out validation set).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper lists the open source projects its implementation builds on (Harris et al., 2020; Paszke et al., 2019; Yadan, 2019; Weiler and Cesa, 2019b; Engelcke et al., 2020; Hunter, 2007; Waskom, 2021) but does not give version numbers for these dependencies.
Experiment Setup | Yes | See Appendix F and our reference implementation for more details on hyperparameters and data preprocessing.
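
Since the paper specifies its operations mathematically rather than as pseudocode, it may help to restate the property those definitions are built around. The following is the textbook notion of G-equivariance (standard notation, not copied from the paper; in the paper's construction the group actions on input and output feature spaces are matched appropriately, since subsampling changes resolution):

```latex
% Standard G-equivariance property: a map f between feature spaces
% commutes with the group action on inputs and outputs.
f(g \cdot x) = g \cdot f(x), \qquad \text{for all } g \in G,\ x \in \mathcal{X}.
```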
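To make the contrast with standard subsampling concrete, here is a minimal 1D, translation-only sketch in NumPy. Fixed-grid subsampling x[::stride] breaks shift equivariance because the sampling grid stays put when the input moves; choosing the grid's phase from the input itself restores it. The function equivariant_subsample and its argmax-based anchor are illustrative assumptions, not the paper's construction, which handles general symmetry groups (see the linked repository for the authors' implementation):

```python
import numpy as np

def equivariant_subsample(x, stride):
    """Translation-equivariant subsampling of a 1D signal (illustrative sketch).

    Fixed-grid subsampling x[::stride] is only equivariant to shifts that are
    multiples of the stride. Choosing the sampling phase from the input itself
    (here via the argmax location) makes the operation commute with circular
    shifts, up to the coarser resolution of the output.
    """
    anchor = int(np.argmax(x))   # input-dependent reference point
    phase = anchor % stride      # grid offset so the grid passes through the anchor
    return x[phase::stride]

# Shifting the input shifts the output correspondingly.
x = np.zeros(16)
x[5] = 1.0
y = equivariant_subsample(x, stride=2)                      # peak at output index 2
y_shifted = equivariant_subsample(np.roll(x, 2), stride=2)  # input shifted by 2
assert np.array_equal(np.roll(y, 1), y_shifted)             # output shifted by 1
```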
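The modified FashionMNIST preprocessing described above (zero-pad 28 × 28 images to 64 × 64, then random shifts, rotations and coloring) can be approximated with a torchvision-style pipeline. This is a hypothetical reconstruction: the padding amount follows from the stated sizes, but the shift/rotation ranges and the coloring scheme below are assumptions; the actual settings are in the paper's Appendix F and reference implementation.

```python
import torch
from torchvision import transforms as T

def random_colorize(img):
    """Tint a single-channel image tensor with a random RGB color
    (hypothetical coloring scheme, assumed for illustration)."""
    color = torch.rand(3, 1, 1)
    return img.repeat(3, 1, 1) * color

transform = T.Compose([
    T.Pad(18),                              # zero-pad 28x28 FashionMNIST to 64x64
    T.RandomAffine(degrees=180,             # random rotation (range assumed)
                   translate=(0.2, 0.2)),   # random shift (range assumed)
    T.ToTensor(),                           # PIL image -> (1, 64, 64) float tensor
    T.Lambda(random_colorize),              # -> (3, 64, 64) randomly colored
])
```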