Interpreting Equivariant Representations

Authors: Andreas Abildtrup Hansen, Anna Calissano, Aasa Feragen

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the effect of the suggested tools via widely encountered group actions on two widely used model classes: 1) a permutation-equivariant variational autoencoder (VAE) representing molecular graphs acted on by node permutations, where we obtain isometric invariant representations of the data, and 2) the equivariant representations of a rotation-invariant image classifier, where we showcase random invariant projections as a general and efficient tool for providing expressive invariant representations.
Researcher Affiliation | Academia | 1) Department of Visual Computing, Technical University of Denmark, Kgs. Lyngby, Denmark; 2) Inria d'Université Côte d'Azur, France; 3) Now at: Department of Mathematics, Imperial College London, London, England.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper.
Open Datasets | Yes | Dataset: The QM9 dataset (Ramakrishnan et al., 2014; Ruddigkeit et al., 2012) consists of approx. 130,000 stable, small molecules, using 80%/10%/10% for training/validation/testing.
Dataset Splits | Yes | Dataset: The QM9 dataset (Ramakrishnan et al., 2014; Ruddigkeit et al., 2012) consists of approx. 130,000 stable, small molecules, using 80%/10%/10% for training/validation/testing.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions software like the 'PyTorch Geometric library (Fey & Lenssen, 2019)', the 'PyTorch library (Paszke et al., 2017)', and the 'ESCNN library provided by (Weiler & Cesa, 2019; Cesa et al., 2022b)' but does not provide specific version numbers for these software components or programming languages.
Experiment Setup | Yes | Training details: The model was trained using the negative evidence lower bound (ELBO), as is standard for VAEs. A learning rate of 0.0001 and a batch size of 32 were chosen. The model was trained for 1000 epochs. The QM9 dataset was obtained through the PyTorch Geometric library (Fey & Lenssen, 2019). [...] Training details: The model was trained using a cross-entropy loss. A learning rate of 0.01 and a batch size of 128 were chosen. The model was trained for 100 epochs. The MNIST dataset was obtained through the PyTorch library (Paszke et al., 2017).
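The paper highlights random invariant projections as an efficient way to obtain expressive invariant representations. As an illustration of the general idea (not necessarily the exact construction used in the paper), one classical recipe for the permutation group is to canonicalise a feature vector by sorting — sorting discards only the ordering, which is exactly the group action — and then project the sorted vector onto random directions:

```python
import numpy as np

rng = np.random.default_rng(0)

def invariant_random_projection(x, n_projections=8, seed=0):
    """Permutation-invariant random features: sort(x) is identical for
    every reordering of x, so projections of sort(x) are invariant."""
    w = np.random.default_rng(seed).normal(size=(n_projections, len(x)))
    return w @ np.sort(x)

# Sanity check: permuting the input does not change the features.
x = rng.normal(size=16)
perm = rng.permutation(16)
a = invariant_random_projection(x)
b = invariant_random_projection(x[perm])
print(np.allclose(a, b))  # True: the features ignore node ordering
```

The number of projections (8 here) is an arbitrary illustrative choice; more projections give a more faithful invariant embedding of the orbit.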
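The 80%/10%/10% QM9 split quoted above can be sketched as follows. The split fractions come from the paper; the shuffling strategy and seed are assumptions, and in practice the dataset itself would be loaded via `torch_geometric.datasets.QM9` as the paper notes:

```python
import random

def train_val_test_split(n_samples, fractions=(0.8, 0.1, 0.1), seed=0):
    """Shuffle indices and split them train/val/test.

    The 80/10/10 fractions follow the paper; the seed and the
    shuffle-then-slice strategy are illustrative assumptions."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_train = int(fractions[0] * n_samples)
    n_val = int(fractions[1] * n_samples)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]
    return train, val, test

# QM9 has roughly 130,000 molecules.
train, val, test = train_val_test_split(130_000)
print(len(train), len(val), len(test))  # 104000 13000 13000
```

Assigning the remainder to the test partition guarantees that every sample lands in exactly one split even when the fractions do not divide the dataset size evenly.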
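The classifier training details quoted above (cross-entropy loss, learning rate 0.01, batch size 128) can be made concrete with a minimal, framework-free training step. This is a stand-in sketch, not the paper's model: the actual classifier is a rotation-invariant network built with PyTorch/ESCNN, the linear model and random batch here are placeholders, and plain SGD is an assumption since the paper does not state the optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters quoted in the paper; the optimizer is an assumption.
LEARNING_RATE = 0.01
BATCH_SIZE = 128
N_CLASSES = 10

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Toy stand-in for one MNIST batch: 784-dim inputs, linear classifier.
W = np.zeros((784, N_CLASSES))
x = rng.normal(size=(BATCH_SIZE, 784))
y = rng.integers(0, N_CLASSES, size=BATCH_SIZE)

for _ in range(50):  # a few SGD steps on a single batch
    probs = softmax(x @ W)
    loss = cross_entropy(probs, y)
    grad = x.T @ (probs - np.eye(N_CLASSES)[y]) / BATCH_SIZE
    W -= LEARNING_RATE * grad

print(loss < np.log(N_CLASSES))  # True: loss drops below chance level, ln(10)
```

The gradient `x.T @ (probs - onehot) / BATCH_SIZE` is the standard closed form for softmax cross-entropy on a linear model; in the paper's setting the same loss and learning rate would drive backpropagation through the equivariant network instead.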