Lie Group Decompositions for Equivariant Neural Networks
Authors: Mircea Mironenco, Patrick Forré
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the robustness and out-of-distribution generalisation capability of our model on the benchmark affine-invariant classification task, outperforming previous proposals. ... For all experiments we use a ResNet-style architecture... |
| Researcher Affiliation | Academia | Mircea Mironenco, AI4Science Lab, AMLab, Informatics Institute, University of Amsterdam, mircea.mironenco@gmail.com; Patrick Forré, AI4Science Lab, AMLab, Informatics Institute, University of Amsterdam, p.d.forre@uva.nl |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/mirceamironenco/rgenn. |
| Open Datasets | Yes | We evaluate our model on a benchmark affine-invariant image classification task employing the affNIST dataset. ... The experimental setup involves training on the standard set of 50000 non-transformed MNIST images (padded to 40×40)... |
| Dataset Splits | No | The paper mentions training and test sets but does not provide specific details about a validation dataset split or how it was used. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper mentions specific software components like "Adam optimizer" and "SIREN networks" but does not provide their specific version numbers. |
| Experiment Setup | Yes | All experiments will use the same ResNet-like architecture (He et al., 2016)... We set ω0 = 10 for all experiments. We use 42 output channels in both the lifting and cross-correlation layers. Each SIREN network consists of 2 layers of size 60. ... The models are trained for 100 epochs, with a batch size of 128, and the Adam optimizer of Kingma & Ba (2014) with a standard learning rate of 0.0001. |
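
Below is a minimal sketch of the experimental configuration quoted in the "Open Datasets" and "Experiment Setup" rows, assuming a standard PyTorch/torchvision workflow. The model is a placeholder, not the authors' equivariant ResNet-style network (their lifting and group cross-correlation layers and SIREN components are not reproduced here), and loading of the affNIST test set is omitted.

```python
# Illustrative sketch only: mirrors the quoted training hyperparameters
# (100 epochs, batch size 128, Adam with learning rate 1e-4) and the data
# setup (MNIST digits zero-padded from 28x28 to 40x40). The network is a
# stand-in; the paper's equivariant architecture is not implemented here.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Pad(6),        # 28x28 -> 40x40, matching "padded to 40x40"
    transforms.ToTensor(),
])
train_set = datasets.MNIST(root="data", train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = nn.Sequential(nn.Flatten(), nn.Linear(40 * 40, 10))  # placeholder network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```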