A Program to Build E(N)-Equivariant Steerable CNNs

Authors: Gabriele Cesa, Leon Lang, Maurice Weiler

ICLR 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "To demonstrate its generality, we instantiate our method on a variety of isometry groups acting on the Euclidean space R^3. Our framework allows us to build E(3)- and SE(3)-steerable CNNs like previous works, but also CNNs with arbitrary G ≤ O(3)-steerable kernels. For example, we build 3D CNNs equivariant to the symmetries of platonic solids or choose G = SO(2) when working with 3D data having only azimuthal symmetries. We compare these models on 3D shapes and molecular datasets, observing improved performance by matching the model's symmetries to the ones of the data." |
| Researcher Affiliation | Collaboration | Gabriele Cesa (Qualcomm AI Research, University of Amsterdam, gcesa@qti.qualcomm.com); Leon Lang (University of Amsterdam, l.lang@uva.nl); Maurice Weiler (University of Amsterdam, m.weiler.ml@gmail.com) |
| Pseudocode | Yes | "Algorithm 1: Generate G-Steerable basis on space X" |
| Open Source Code | Yes | "Finally, we implement the program described in this work as a general purpose library based on PyTorch at github.com/QUVA-Lab/escnn." (A brief usage sketch follows the table.) |
| Open Datasets | Yes | "To emphasize our method is not limited to R^3, we include a simple experiment with 2D images. In the rest of the section, we compare different model designs on two volumetric datasets: ModelNet10 (Wu et al., 2015) (and a rotated version of it) and LBA (Townshend et al., 2020)." |
| Dataset Splits | Yes | "For each model, we independently performed simple tuning of batch size, learning rate and weight decay by evaluating them on the validation set." (Sections H.4 and H.5); "To make our results comparable with Townshend et al. (2020), we use the same train, validation and test split." (Section H.8) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | "Finally, we implement the program described in this work as a general purpose library based on PyTorch at github.com/QUVA-Lab/escnn." (Section 1, last paragraph). No specific version numbers for PyTorch or other libraries are provided. |
| Experiment Setup | Yes | "We train both models using Adam. We performed simple tuning of batch size, learning rate and weight decay of each model independently by evaluating them on the validation set." (Section H.4); "The number of channels in each residual block is N = [96, 192, 288, 288, 576, 576]." (Section H.4); "We train all models using Adam. For each model, we independently performed simple tuning of batch size, learning rate and weight decay." (Section H.5) |
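
For readers who want to try the released library, the following is a minimal sketch of building an SO(3)-steerable 3D convolution with escnn. It assumes the library's public gspaces/nn API (`rot3dOnR3`, `FieldType`, `R3Conv`, `GeometricTensor`); the channel counts, kernel size, and input shape are illustrative choices, not the paper's configuration.

```python
import torch
from escnn import gspaces, nn

# SO(3) acting on R^3: rotations of a 3D volume
gspace = gspaces.rot3dOnR3()

# Input: a single scalar field (trivial representation), e.g. a voxel grid
in_type = nn.FieldType(gspace, [gspace.trivial_repr])
# Output: four vector fields transforming under the frequency-1 irrep
out_type = nn.FieldType(gspace, 4 * [gspace.irrep(1)])

# Steerable 3D convolution: kernels are expanded in a G-steerable basis
# (the object that Algorithm 1 in the paper constructs)
conv = nn.R3Conv(in_type, out_type, kernel_size=5, padding=2)

# Wrap a raw tensor so the field type (and hence the group action) is tracked
x = nn.GeometricTensor(torch.randn(2, 1, 33, 33, 33), in_type)
y = conv(x)  # equivariant: rotating the input rotates the output fields
```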
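Similarly, a hedged sketch of the optimizer setup quoted in the Experiment Setup row, reusing `conv` from the snippet above; the learning-rate and weight-decay values are placeholders, since the paper tunes them per model on the validation set rather than listing them here.

```python
import torch

# Placeholder hyperparameters: the paper reports tuning batch size,
# learning rate and weight decay per model, but the tuned values
# are not quoted in the excerpts above.
optimizer = torch.optim.Adam(conv.parameters(), lr=1e-3, weight_decay=1e-5)
```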