Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs

Authors: Pim de Haan, Maurice Weiler, Taco Cohen, Max Welling

ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 6 "EXPERIMENTS"; Figure 3: "Test errors for MNIST digit classification on embedded meshes."; Table 2: "Results of FAUST shape correspondence."
Researcher Affiliation | Collaboration | Pim de Haan (Qualcomm AI Research, University of Amsterdam); Maurice Weiler (QUVA Lab, University of Amsterdam); Taco Cohen (Qualcomm AI Research); Max Welling (Qualcomm AI Research, University of Amsterdam)
Pseudocode | Yes | Algorithm 1: "Gauge Equivariant Mesh CNN layer"
Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor a link to a code repository.
Open Datasets | Yes | "We first investigate how Gauge Equivariant Mesh CNNs perform on, and generalize between, different mesh geometries. For this purpose we conduct simple MNIST digit classification experiments on embedded rectangular meshes of 28×28 vertices."; "As a second experiment, we perform non-rigid shape correspondence on the FAUST dataset (Bogo et al., 2014), following Masci et al. (2015)."
Dataset Splits | No | For MNIST: "For each of the considered settings we generate 32 different train and 32 test geometries." For FAUST: "The data consists of 100 meshes of human bodies in various positions, split into 80 train and 20 test meshes." The paper does not describe a validation split.
Hardware Specification | No | "These experiments were executed on QUVA machines." This is too vague; no specific hardware details such as GPU/CPU models or memory sizes are given.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) used for the experiments.
Experiment Setup | Yes | "The architecture transforms the vertices' XYZ coordinates (of type 3ρ0), via 6 convolutional layers, to features of type 64ρ0, with intermediate features of type 16(ρ0 ⊕ ρ1 ⊕ ρ2), with residual connections and the Regular Nonlinearity with N = 5 samples. Afterwards, we use two 1×1 convolutions with ReLU to map first to 256 and then to 6890 channels, after which a softmax predicts the registration probabilities. The 1×1 convolutions use a dropout of 50% and 1e-4 weight decay. The network is trained with a cross entropy loss with an initial learning rate of 0.01, which is halved when the training loss reaches a plateau." A hedged code sketch of this setup follows the table.
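The sketch below illustrates the FAUST experiment setup quoted above, not the paper's released implementation (none is available per the Open Source Code row). The six gauge-equivariant convolution layers are replaced by a placeholder module; the optimizer choice (Adam), the placement of dropout, applying weight decay to all parameters rather than only the 1×1 convolutions, and the 6890-class output (the FAUST template vertex count) are assumptions made for illustration.

```python
# Hedged sketch of the FAUST readout head and training configuration described above.
# The GEM-CNN backbone is a stand-in; only the 1x1-convolution head, 50% dropout,
# 1e-4 weight decay, cross-entropy loss, and lr schedule follow the quoted setup.
import torch
import torch.nn as nn

NUM_VERTICES = 6890  # FAUST template vertex count (assumption based on the dataset)


class Backbone(nn.Module):
    """Placeholder for the 6-layer GEM-CNN mapping 3*rho_0 inputs to 64*rho_0 features."""

    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.proj = nn.Linear(3, out_channels)  # simple stand-in, NOT gauge equivariant

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (num_vertices, 3) coordinates -> (num_vertices, 64) per-vertex features
        return self.proj(xyz)


class CorrespondenceHead(nn.Module):
    """Two per-vertex 1x1 convolutions (Linear layers) with ReLU and 50% dropout."""

    def __init__(self, in_channels: int = 64, hidden: int = 256, classes: int = NUM_VERTICES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(0.5),
            nn.Linear(in_channels, hidden),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, classes),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)  # logits over template vertices; softmax lives in the loss


model = nn.Sequential(Backbone(), CorrespondenceHead())
# Adam is an assumption; weight decay is applied globally here for simplicity.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-4)
# Halve the learning rate when the training loss plateaus, as described in the setup.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy mesh.
xyz = torch.randn(NUM_VERTICES, 3)        # vertex coordinates (type 3*rho_0)
target = torch.arange(NUM_VERTICES)       # ground-truth registration indices
logits = model(xyz)
loss = criterion(logits, target)
loss.backward()
optimizer.step()
scheduler.step(loss.item())
```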