Unsupervised Learning of Group Invariant and Equivariant Representations

Authors: Robin Winter, Marco Bertolini, Tuan Le, Frank Noé, Djork-Arné Clevert

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test the validity and the robustness of our approach in a variety of experiments with diverse data types, employing different network architectures. In this section we present different experiments for the various groups discussed in Section 3.
Researcher Affiliation | Collaboration | Robin Winter (Bayer AG; Freie Universität Berlin; robin.winter@bayer.com); Marco Bertolini (Bayer AG; marco.bertolini@bayer.com); Tuan Le (Bayer AG; Freie Universität Berlin; tuan.le2@bayer.com); Frank Noé (Freie Universität Berlin; Microsoft Research; frank.noe@fu-berlin.de); Djork-Arné Clevert (Bayer AG; djork-arne.clevert@bayer.com)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Source code for the different implementations is available at https://github.com/jrwnter/giae.
Open Datasets | Yes | In the first experiment, we train an SO(2)-invariant autoencoder on the original (non-rotated) MNIST dataset and validate the trained model on the rotated MNIST dataset (ref. mni), which consists of randomly rotated versions of the original MNIST dataset. Rotated MNIST: https://sites.google.com/a/lisa.iro.umontreal.ca/public_static_twiki/variations-on-the-mnist-digits [Online; accessed 05-January-2021]. We showcase our learning framework on real-world data by autoencoding the atom types and geometries of small molecules from the QM9 database (Ramakrishnan et al., 2014). (A sketch of this train-upright/evaluate-rotated protocol appears after the table.)
Dataset Splits | Yes | We randomly sampled 1,000,000 different sets for training and 100,000 for the final evaluation with N = 20, 30, 40, 100, respectively, removing all permutation-equivalent sets (i.e., there are no two sets that are the same up to a permutation). The classifier based on our rotation-invariant embeddings achieved an accuracy of 0.81, while the classifier based on the non-invariant embeddings achieved an accuracy of only 0.63. (A sketch of the deduplication step appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU or CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions using specific network architectures like 'SO(2)-Steerable Convolutional Neural Networks' and '3D Steerable CNNs', but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | For more details about the network architecture and training, we refer to Appendix B. We use a batch size of 128 and train the model for 50 epochs using the Adam optimizer with a learning rate of 10^-3. (A sketch of these settings appears after the table.)
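
The rotated-MNIST protocol quoted under Open Datasets (train on upright digits, evaluate on randomly rotated ones) can be emulated with standard tooling. The following is a minimal sketch assuming torchvision as a stand-in; the paper's actual benchmark uses the pre-generated rotated-MNIST files linked in the table, not on-the-fly rotations.

import torch
from torchvision import datasets, transforms

# Original (non-rotated) MNIST for training.
train_set = datasets.MNIST(
    root="data", train=True, download=True,
    transform=transforms.ToTensor(),
)

# Randomly rotated digits for validation, approximating the rotated-MNIST set.
rotated_eval_set = datasets.MNIST(
    root="data", train=False, download=True,
    transform=transforms.Compose([
        transforms.RandomRotation(degrees=(-180, 180)),  # uniform random rotation
        transforms.ToTensor(),
    ]),
)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
eval_loader = torch.utils.data.DataLoader(rotated_eval_set, batch_size=128)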
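The split procedure quoted under Dataset Splits hinges on removing duplicate sets up to permutation. One way to do this is to key each sampled set on its sorted (canonical) form, as in the sketch below. The element type is an assumption made purely for illustration (integer tokens from a small vocabulary); the paper's actual set elements may differ.

import random

def sample_unique_sets(num_sets, set_size, vocab_size, rng, seen):
    """Draw num_sets random sets, skipping any set already seen up to permutation."""
    sets = []
    while len(sets) < num_sets:
        s = tuple(rng.randrange(vocab_size) for _ in range(set_size))
        key = tuple(sorted(s))  # canonical form: permuted copies share the same key
        if key not in seen:
            seen.add(key)
            sets.append(s)
    return sets

rng = random.Random(0)
seen = set()  # shared across calls, so evaluation sets never duplicate training sets
train_sets = sample_unique_sets(1_000_000, 20, 10, rng, seen)  # N = 20 case
eval_sets = sample_unique_sets(100_000, 20, 10, rng, seen)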
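The optimization settings quoted under Experiment Setup (batch size 128, 50 epochs, Adam with learning rate 10^-3) translate to a training loop along the following lines. The model here is a placeholder autoencoder, not the paper's architecture (which is described in its Appendix B); train_loader is reused from the MNIST sketch above.

import torch
import torch.nn as nn

model = nn.Sequential(  # placeholder autoencoder, not the paper's network
    nn.Flatten(),
    nn.Linear(28 * 28, 64), nn.ReLU(),
    nn.Linear(64, 28 * 28),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate 10^-3
loss_fn = nn.MSELoss()

for epoch in range(50):                 # 50 epochs, as reported
    for x, _ in train_loader:           # batches of 128, as reported
        optimizer.zero_grad()
        recon = model(x)
        loss = loss_fn(recon, x.flatten(start_dim=1))  # reconstruction loss
        loss.backward()
        optimizer.step()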