Equivariance versus Augmentation for Spherical Images

Authors: Jan Gerken, Oscar Carlsson, Hampus Linander, Fredrik Ohlsson, Christoffer Petersson, Daniel Persson

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our models are trained and evaluated on single or multiple items from the MNIST or Fashion MNIST dataset projected onto the sphere.
Researcher Affiliation | Collaboration | Department of Mathematical Sciences, Chalmers University of Technology, Gothenburg, Sweden; Zenseact, Gothenburg, Sweden.
Pseudocode | No | The paper contains mathematical equations and descriptions of model architectures, but no explicitly labeled pseudocode or algorithm blocks were found.
Open Source Code | Yes | The equivariant spherical networks used in the experiments are available at https://github.com/JanEGerken/sem_seg_s2cnn.
Open Datasets | Yes | Our models are trained and evaluated on single or multiple items from the MNIST or Fashion MNIST dataset projected onto the sphere.
Dataset Splits | Yes | For validation, we generated datasets in the same way as for training, but sampling 10,000 data points from the test split of the MNIST dataset (a sketch of this step follows the table).
Hardware Specification | Yes | Training was performed on an Nvidia T4 16GB GPU.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer, the ReLU nonlinearity, PyTorch einsum, and a CUDA implementation, but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | All models are trained with batch size 32 and learning rate 10^-3 using Adam on a segmentation task with one MNIST digit on the sphere and 60k rotated training samples until convergence, and then evaluated. In all experiments, we use early stopping on the non-background mIoU metric and a maximum of 100 epochs (a configuration sketch follows the table).
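
The Dataset Splits row above reports a validation set of 10,000 points drawn from the MNIST test split. The following is a minimal sketch of that step, assuming torchvision's MNIST dataset; the spherical projection and random rotations described in the paper are omitted here and are implemented in the released sem_seg_s2cnn repository.

```python
# Minimal sketch (not the authors' pipeline): building a 10,000-point
# validation set from the MNIST test split. The projection onto the sphere
# and the random rotations are omitted; see the sem_seg_s2cnn repository.
import numpy as np
from torchvision.datasets import MNIST

# MNIST's test split contains exactly 10,000 images, so this yields one
# validation sample per test image.
test_set = MNIST(root="data", train=False, download=True)

rng = np.random.default_rng(seed=0)
indices = rng.permutation(len(test_set))[:10_000]
validation_samples = [test_set[int(i)] for i in indices]  # (PIL image, label) pairs
```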
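
The Experiment Setup row above specifies Adam with learning rate 10^-3, batch size 32, at most 100 epochs, and early stopping on the non-background mIoU. Below is a minimal PyTorch training-loop sketch of that configuration; the model, data loaders, mIoU computation, and the early-stopping patience are placeholders, not the authors' implementation.

```python
# Minimal training-configuration sketch, assuming a generic PyTorch
# segmentation model and data loaders built with batch size 32.
import torch

def train(model, train_loader, val_loader, compute_non_background_miou,
          max_epochs=100, patience=5):  # patience value is an assumption
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = torch.nn.CrossEntropyLoss()

    best_miou, epochs_without_improvement = 0.0, 0
    for epoch in range(max_epochs):
        model.train()
        for inputs, targets in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)
            loss.backward()
            optimizer.step()

        # Early stopping on the validation non-background mIoU.
        model.eval()
        with torch.no_grad():
            miou = compute_non_background_miou(model, val_loader)

        if miou > best_miou:
            best_miou, epochs_without_improvement = miou, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
    return best_miou
```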