The Selective $G$-Bispectrum and its Inversion: Applications to $G$-Invariant Networks

Authors: Simon Mataigne, Johan Mathe, Sophia Sanborn, Christopher Hillar, Nina Miolane

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We prove desirable mathematical properties of the selective G-Bispectrum and demonstrate how its integration into neural networks enhances accuracy and robustness compared to traditional approaches, while enjoying considerable speed-ups compared to the full G-Bispectrum.
Researcher Affiliation | Collaboration | Simon Mataigne (ICTEAM, UCLouvain, Louvain-la-Neuve, Belgium; simon.mataigne@uclouvain.be); Johan Mathe (Atmo, San Francisco, CA; johan@atmo.ai); Sophia Sanborn (Science, San Francisco, CA; sophiasanborn@gmail.com); Christopher Hillar (Algebraic, San Francisco, CA; hillarmath@gmail.com); Nina Miolane (UC Santa Barbara, Santa Barbara, CA; ninamiolane@ucsb.edu)
Pseudocode | Yes | Algorithm 1: Selective G-Bispectrum on any finite group G (a hedged numerical sketch for the cyclic case follows this table).
Open Source Code | Yes | Our implementation of the selective G-bispectrum layer is based on the gtc-invariance repository, which implements the G-CNN with G-convolution and G-TC layer [28] and itself relies on the escnn library [3, 32]. The implementations related to this section can be found in the g-invariance repository. (An illustrative escnn snippet follows this table.)
Open Datasets | Yes | We run extensive experiments on the MNIST [23] and EMNIST [5] datasets to evaluate how each invariant layer (Max G-pooling, G-TC, selective G-Bispectrum) impacts accuracy and speed on classification tasks.
Dataset Splits | Yes | We obtain transformed versions of the datasets, G-MNIST/EMNIST, by applying a random action g ∈ G to each image in the original dataset. ... We assess the accuracy by averaging the validation accuracy over 10 runs. (A sketch of this random-action transform follows this table.)
Hardware Specification | Yes | The experiments are performed using 8 cores of an NVIDIA A30 GPU.
Software Dependencies | No | The paper mentions relying on the 'escnn library [3, 32]' and the 'gtc-invariance repository' but does not specify version numbers for these software components or other dependencies.
Experiment Setup | Yes | The neural network architecture is composed of a G-convolution, a G-invariant layer, and finally a Multi-Layer Perceptron (MLP), itself composed of three fully connected layers with ReLU nonlinearity. Finally, a fully connected linear layer is added to perform classification. The MLPs' widths are tuned to match the number of parameters across each neural network model. The details are given in Appendix G. (A schematic PyTorch sketch of this pipeline follows this table.)
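
The Pseudocode row above points to Algorithm 1 for the selective G-Bispectrum on any finite group G. As a minimal numerical illustration only, the sketch below computes a selective set of bispectrum coefficients for the cyclic group Z_n using the ordinary DFT; the pair selection (1, k) and the function name are assumptions made for this example, not the paper's Algorithm 1.

```python
import numpy as np

def selective_bispectrum_cyclic(f):
    """Selective bispectrum coefficients of a signal on the cyclic group Z_n.

    Illustrative only: the pairs (1, k) are one possible O(n) subset of the
    full n x n bispectrum; the paper's Algorithm 1 works on any finite group G
    through its irreducible representations.
    """
    fh = np.fft.fft(f)                  # G-Fourier transform (here: the DFT on Z_n)
    n = len(f)
    k = np.arange(n)
    # B(1, k) = fh[1] * fh[k] * conj(fh[(1 + k) mod n])
    return fh[1] * fh[k] * np.conj(fh[(1 + k) % n])

rng = np.random.default_rng(0)
f = rng.standard_normal(8)
g_f = np.roll(f, 3)                     # group action: cyclic shift by g = 3
# The coefficients are invariant to the shift (up to floating-point error).
assert np.allclose(selective_bispectrum_cyclic(f), selective_bispectrum_cyclic(g_f))
```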
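
The Open Source Code row mentions the escnn library and the gtc-invariance/g-invariance repositories. The snippet below is a hedged sketch of the kind of G-equivariant building blocks escnn exposes (a C_8 G-convolution followed by max G-pooling, one of the invariant layers the paper compares); layer sizes are illustrative and do not reproduce the paper's models.

```python
import torch
from escnn import gspaces, nn as enn

# G = C_8: planar rotations by multiples of 45 degrees.
r2_act = gspaces.rot2dOnR2(N=8)
in_type = enn.FieldType(r2_act, [r2_act.trivial_repr])        # 1 grayscale input channel
out_type = enn.FieldType(r2_act, 16 * [r2_act.regular_repr])  # 16 regular-representation fields

g_conv = enn.R2Conv(in_type, out_type, kernel_size=5)         # G-convolution
max_g_pool = enn.GroupPooling(out_type)                       # Max G-pooling (invariant layer)

x = enn.GeometricTensor(torch.randn(1, 1, 28, 28), in_type)
y = max_g_pool(g_conv(x))                                     # 16 rotation-invariant feature maps
```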
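
The Dataset Splits row describes building G-MNIST/EMNIST by applying a random action g ∈ G to each image. The sketch below shows one way to do this with torchvision for G = C_8 (rotations by multiples of 45°); the group choice and the helper name random_c8_rotation are assumptions for illustration.

```python
import random
import torchvision
import torchvision.transforms as T
import torchvision.transforms.functional as TF

def random_c8_rotation(img):
    """Apply a uniformly random element of C_8 (hypothetical helper for illustration)."""
    g = random.randrange(8)
    return TF.rotate(img, angle=360.0 * g / 8)

# "G-MNIST": every image is transformed by a random group element.
g_mnist = torchvision.datasets.MNIST(
    root="data", train=True, download=True,
    transform=T.Compose([random_c8_rotation, T.ToTensor()]))
```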
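
The Experiment Setup row describes a pipeline of G-convolution, G-invariant layer, three-layer ReLU MLP, and a final linear classifier. The PyTorch class below is a schematic sketch of that composition only; GInvariantClassifier, its arguments, and the hidden widths are placeholders, not the tuned models of Appendix G.

```python
import torch
import torch.nn as nn

class GInvariantClassifier(nn.Module):
    """Schematic composition only; the paper's tuned widths are in Appendix G."""

    def __init__(self, g_conv, invariant_layer, feat_dim, hidden=128, n_classes=10):
        super().__init__()
        self.g_conv = g_conv              # e.g. an escnn-based G-convolution
        self.invariant = invariant_layer  # Max G-pooling, G-TC, or selective G-Bispectrum
        self.mlp = nn.Sequential(         # three fully connected layers with ReLU
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.classifier = nn.Linear(hidden, n_classes)  # final linear layer for classification

    def forward(self, x):
        x = self.invariant(self.g_conv(x))              # G-invariant features
        x = torch.flatten(x, start_dim=1)
        return self.classifier(self.mlp(x))
```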