A General Framework for Robust G-Invariance in G-Equivariant Networks

Authors: Sophia Sanborn, Nina Miolane

NeurIPS 2023

Reproducibility assessment. Each entry below gives the reproducibility variable, its assessed result, and the supporting LLM response.
Research Type: Experimental
LLM Response: Our extensive experiments demonstrate improved classification accuracy on traditional benchmark datasets as well as improved adversarial robustness. We examine the performance of the G-TC over Max G-Pooling in G-Equivariant Networks defined on these groups and trained on G-Invariant classification tasks.
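The comparison at issue is between two ways of collapsing the group dimension of a G-equivariant feature map into a G-invariant descriptor: Max G-Pooling takes a maximum over the group axis, while the G-TC (group triple correlation) computes third-order correlations that are likewise invariant yet preserve far more signal structure. Below is a minimal illustrative sketch for the cyclic group Z_n; the function names are ours, not from the paper's codebase, and the paper's layer generalizes this construction to groups such as SO(2), O(2), SO(3), and O(3).

```python
import numpy as np

def max_g_pool(f):
    # Max G-Pooling: collapse the group axis with a maximum.
    return f.max(axis=-1)

def g_triple_correlation(f):
    # Triple correlation on the cyclic group Z_n:
    #   T[i, j] = sum_k f[k] * f[(k + i) % n] * f[(k + j) % n]
    # Invariant to cyclic shifts of f, but, unlike the max, it retains
    # enough structure to distinguish most non-equivalent signals.
    n = f.shape[-1]
    shifts = np.stack([np.roll(f, -i) for i in range(n)])  # shifts[i, k] = f[(k + i) % n]
    return np.einsum("k,ik,jk->ij", f, shifts, shifts)

f = np.random.randn(8)  # a signal on Z_8 (one group-indexed feature channel)
g = np.roll(f, 3)       # the same signal transformed by a group element

# Both descriptors are invariant to the group action.
assert np.isclose(max_g_pool(f), max_g_pool(g))
assert np.allclose(g_triple_correlation(f), g_triple_correlation(g))
```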
Researcher Affiliation: Academia
LLM Response: Sophia Sanborn (sanborn@ucsb.edu) and Nina Miolane (ninamiolane@ucsb.edu), Department of Electrical and Computer Engineering, UC Santa Barbara.
Pseudocode: No
LLM Response: The paper does not contain any pseudocode or algorithm blocks.
Open Source Code: Yes
LLM Response: The code is publicly available at https://github.com/sophiaas/gtc-invariance.
Open Datasets: Yes
LLM Response: For the groups SO(2) and O(2) acting on R^2, we use the MNIST dataset of handwritten characters [37], and for the groups SO(3) and O(3) acting on R^3, we use the voxelized ModelNet10 database of 3D objects [52].
Dataset Splits: Yes
LLM Response: A random 20% of the training dataset is set aside for model validation and is used to tune hyperparameters. The remaining 80% is used for training.
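As a concrete illustration of the split, the sketch below reproduces the 80/20 train/validation partition on MNIST in PyTorch. The fixed generator seed is our assumption for determinism; the paper does not specify how the random split is seeded.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# MNIST is the 2D benchmark used for the SO(2)/O(2) experiments.
full_train = datasets.MNIST("data", train=True, download=True,
                            transform=transforms.ToTensor())

# Hold out a random 20% of the training set for validation and
# hyperparameter tuning; train on the remaining 80%.
n_val = int(0.2 * len(full_train))
train_set, val_set = random_split(
    full_train,
    [len(full_train) - n_val, n_val],
    generator=torch.Generator().manual_seed(0),  # assumed seed
)
```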
Hardware Specification: No
LLM Response: The paper does not specify the hardware used for experiments.
Software Dependencies: No
LLM Response: The paper mentions building upon the ESCNN library, but does not provide specific version numbers for software dependencies.
Experiment Setup: Yes
LLM Response: Full training details, including hyperparameters, are provided in Appendix G. All models are trained with a cross-entropy loss, using the Adam optimizer with a learning rate of 0.00005, weight decay of 0.00001, betas of [0.9, 0.999], and epsilon of 10^-8, together with a reduce-on-plateau learning rate scheduler with a factor of 0.5, patience of 2 epochs, and a minimum learning rate of 0.00001. Each model is trained with four random seeds [0, 1, 2, 3], and results are averaged across seeds.
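Read literally, those hyperparameters map onto the following PyTorch configuration. The model is a stand-in placeholder, so this is a sketch of the quoted settings rather than the authors' actual training script.

```python
import torch

model = torch.nn.Linear(64, 10)  # placeholder for the G-equivariant classifier
criterion = torch.nn.CrossEntropyLoss()

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=0.00005,
    weight_decay=0.00001,
    betas=(0.9, 0.999),
    eps=1e-8,
)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.5, patience=2, min_lr=0.00001
)

# Training is repeated for seeds [0, 1, 2, 3] and results averaged:
# for seed in (0, 1, 2, 3):
#     torch.manual_seed(seed)
#     ... train; call scheduler.step(val_loss) after each validation epoch ...
```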