Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Affine Steerable Equivariant Layer for Canonicalization of Neural Networks

Authors: Yikang Li, Yeqing Qiu, Yuxuan Chen, Zhouchen Lin

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on image classification tasks involving group transformations to validate the steerable EquivarLayer in the role of a canonicalization function, demonstrating its effectiveness over data augmentation. [...] We conduct experiments on the original MNIST dataset and its transformed version, MNIST-GL+(2), which undergoes random transformations from the GL+(2) group [...] Each experiment is repeated five times with independent random seeds, and we report the mean ± standard deviation of test error in Table 1.
Researcher Affiliation | Academia | ¹State Key Lab of General AI, School of Intelligence Science and Technology, Peking University; ²The Chinese University of Hong Kong, Shenzhen; ³Shenzhen Research Institute of Big Data; ⁴Khoury College of Computer Sciences, Northeastern University; ⁵Institute for Artificial Intelligence, Peking University; ⁶Pazhou Laboratory (Huangpu), Guangzhou, Guangdong, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It provides mathematical formulations and definitions but no procedural algorithms in a code-like format.
Open Source Code | Yes | The code is available at https://github.com/Liyk127/EquivarLayer.
Open Datasets | Yes | Our experiments focus on image classification tasks using the MNIST dataset and its transformed versions under different group actions. [...] We utilize a ResNet-50 model pre-trained on the ImageNet-1K dataset as the prediction network for our experiments [...] As shown in Tables 5, 6, and 7, we include additional experiments on Fashion-MNIST (Xiao et al., 2017) and its transformed variants.
Dataset Splits | Yes | In all experiments in Section 4, each training dataset consists of 50,000 images, and each test dataset consists of 10,000 images. [...] We generate 2310 sequences with different initial conditions and split the dataset into a 5:2 ratio, using 1650 sequences for training (corresponding to 79,200 data points) and 660 sequences for testing (31,680 data points).
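The split figures quoted above are internally consistent; a minimal sketch checking the arithmetic (variable names are illustrative, not from the paper's code):

```python
# Verify the reported 5:2 sequence split and per-sequence point counts.
total_sequences = 2310
train_ratio, test_ratio = 5, 2  # 5:2 train/test split

train_sequences = total_sequences * train_ratio // (train_ratio + test_ratio)
test_sequences = total_sequences - train_sequences
points_per_sequence = 79_200 // train_sequences  # 48 data points per sequence

print(train_sequences, test_sequences)        # 1650 660
print(train_sequences * points_per_sequence)  # 79200
print(test_sequences * points_per_sequence)   # 31680
```

Note that 2310 divides evenly into a 5:2 ratio (1650 + 660), and both the training and test point counts match 48 data points per sequence.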
Hardware Specification | Yes | All experiments are conducted on an NVIDIA RTX 3090 GPU.
Software Dependencies | No | The paper mentions optimizers such as SGD and AdamW and architectures such as ResNet, but does not provide specific version numbers for any key software components or libraries (e.g., Python, PyTorch, CUDA).
Experiment Setup | Yes | The model is fine-tuned using SGD with a learning rate of 10⁻³, a weight decay of 5×10⁻⁴, and a momentum of 0.9 for a duration of 50 epochs. The learning rate scheduler reduces the learning rate at one-third and one-half of the total epochs, multiplying it by a factor of 0.1 at each milestone. The batch size for datasets is set to 128. [...] The model is trained using an AdamW optimizer with a learning rate of 2×10⁻³ and employs a cosine annealing scheduler for 200 epochs. The batch size for the datasets is set to 128.
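The step-decay schedule described above (drops at one-third and one-half of training, each by a factor of 0.1) can be sketched as a plain function; the names and structure here are assumptions for illustration, not the authors' code:

```python
# Step-decay learning-rate schedule: multiply the base LR by 0.1 at each
# milestone the current epoch has passed. With 50 total epochs, the
# milestones fall at epochs 16 (one-third) and 25 (one-half).
def lr_at_epoch(epoch, base_lr=1e-3, total_epochs=50, gamma=0.1):
    milestones = [total_epochs // 3, total_epochs // 2]
    passed = sum(epoch >= m for m in milestones)
    return base_lr * gamma ** passed

# Epochs 0-15 train at 1e-3, epochs 16-24 at 1e-4, epochs 25-49 at 1e-5.
```

In PyTorch this behavior corresponds to `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[16, 25]` and `gamma=0.1`, though the paper does not name the implementation it uses.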