Learning Deep O($n$)-Equivariant Hyperspheres
Authors: Pavlo Melnyk, Michael Felsberg, Mårten Wadenbäck, Andreas Robinson, Cuong Le
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using synthetic and real-world data in $n$D, we experimentally verify our theoretical contributions and find that our approach is superior to the competing methods for O(n)-equivariant benchmark datasets (classification and regression), demonstrating a favorable speed/performance trade-off. |
| Researcher Affiliation | Academia | 1Computer Vision Laboratory, Department of Electrical Engineering, Linköping University, Sweden. Correspondence to: Michael Felsberg <michael.felsberg@liu.se>. |
| Pseudocode | No | The paper describes methods and derivations but does not include structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | Yes | The code is available on GitHub. |
| Open Datasets | Yes | For this experiment, we select the task of classifying the 3D skeleton data, presented and extracted by Melnyk et al. (2022) from the UTKinect Action3D dataset by Xia et al. (2012). |
| Dataset Splits | Yes | In each experiment, we train the models using the same hyperparameters and present the test-set performance of the models chosen based on their validation-set performance. |
| Hardware Specification | Yes | To measure inference time, we used an NVIDIA A100. |
| Software Dependencies | No | The paper states "All the models are implemented in PyTorch (Paszke et al., 2019)", but does not provide specific version numbers for PyTorch or any other software dependencies, which is required for reproducibility. |
| Experiment Setup | No | The paper mentions "we train the models using the same hyperparameters" and "We use the same training hyperparameters and evaluation setup as Ruhe et al. (2023)", indicating that hyperparameters were applied consistently or adopted from prior work, but it does not explicitly list the specific hyperparameter values (e.g., learning rate, batch size, epochs) within the paper itself. |