Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
In search of projectively equivariant networks
Authors: Georg Bökman, Axel Flinth, Fredrik Kahl
TMLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Projective equivariance is showcased in two simple experiments. Code for the experiments is provided at github.com/usinedepain/projectively_equivariant_deep_nets. ... In Section 3.2, we describe this application, and perform some proof-of-concept experiments on modified MNIST data. ... We train 30 models of each type. The evolution of the test and training accuracy is depicted in Figure 3, along with confidence intervals containing 80% of the runs. ... In Appendix D, similar modifications of the CIFAR10 dataset are considered. ... We present results in Figure 5. |
| Researcher Affiliation | Academia | Georg Bökman EMAIL Department of Electrical Engineering Chalmers University of Technology Axel Flinth axel.flinth@umu.se Department of Mathematics and Mathematical Statistics Umeå University Fredrik Kahl EMAIL Department of Electrical Engineering Chalmers University of Technology |
| Pseudocode | No | The paper describes methods and architectures in detail (e.g., in Section 3 and 4), but it does not include any distinct pseudocode blocks or algorithm listings formatted as code. |
| Open Source Code | Yes | Code for the experiments is provided at github.com/usinedepain/projectively_equivariant_deep_nets |
| Open Datasets | Yes | We modify the MNIST dataset (Le Cun et al., 1998), by adding an additional class... In Appendix D, similar modifications of the CIFAR10 dataset are considered. |
| Dataset Splits | No | The paper mentions using training and test sets for MNIST/CIFAR10 and prototypes + noise for training and rotated prototypes + noise for evaluation in the point cloud task. However, it does not provide specific percentages, sample counts, or explicit predefined split references for these modified datasets. For example, it mentions 'The evolution of the test and training accuracy is depicted in Figure 3' but without detailing the exact split methodology. |
| Hardware Specification | Yes | All nets were trained... on single A40 GPUs, in parallel on a high performance computing cluster. |
| Software Dependencies | No | The paper mentions "Adam optimiser with default PyTorch settings" and that "The implementation is inspired by the e3nn package (Geiger & Smidt, 2022)". However, specific version numbers for PyTorch or e3nn are not provided. |
| Experiment Setup | Yes | We train 30 models of each type. ... All nets were trained using the Adam optimiser with default PyTorch settings, for 300 epochs with learning rate 10⁻². ... The loss used is L2-loss on the regressed spinor... We use sigmoid-gated nonlinearities (Weiler et al., 2018) for non-scalar features and GELU for scalar features. ... A batch size of 32 was used for all experiments. |
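For quick reference, the experiment setup reported in the table can be collected into a single configuration sketch. This is an illustrative summary only: the function name `reported_training_config` and the dictionary layout are assumptions, not taken from the authors' repository, and the values simply restate what the paper's text reports.

```python
# Hedged sketch collecting the hyperparameters quoted above.
# The structure is illustrative; the values restate the paper's text.

def reported_training_config():
    """Return the training hyperparameters reported in the paper."""
    return {
        "optimizer": "Adam",             # "default PyTorch settings" per the paper
        "learning_rate": 1e-2,           # reported as 10^-2
        "epochs": 300,
        "batch_size": 32,
        "runs_per_model": 30,            # "We train 30 models of each type"
        "loss": "L2 on regressed spinor",
        "hardware": "single A40 GPU",    # trained in parallel on an HPC cluster
    }

if __name__ == "__main__":
    cfg = reported_training_config()
    print(f"{cfg['epochs']} epochs, batch size {cfg['batch_size']}")
```

Note that version numbers for PyTorch and e3nn are not recoverable from the paper, which is why the "Software Dependencies" variable above is classified as No.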