An Intuitive Multi-Frequency Feature Representation for SO(3)-Equivariant Networks
Authors: Dongwon Son, Jaehyung Kim, Sanghyeon Son, Beomjoon Kim
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct our experiments on both SO(3)-invariant and SO(3)-equivariant tasks. We use three tasks adopted from Deng et al. (2021): point cloud classification (invariant), segmentation (invariant), and point cloud completion (equivariant in the encoder, invariant in the decoder). Further, we evaluate them on three more SO(3)-equivariant tasks: shape compression, normal estimation, and point cloud registration, adopted from Mescheder et al. (2019), Puny et al. (2021), and Zhu et al. (2022). |
| Researcher Affiliation | Academia | Dongwon Son, Jaehyung Kim, Sanghyeon Son, Beomjoon Kim; Department of AI, KAIST; {dongwon.son,kimjaehyung,ssh98son,beomjoon.kim}@kaist.ac.kr |
| Pseudocode | Yes | Algorithm 1: CONSTRUCT J1, J2, J3 (a generic sketch of the frequency-1 generators appears after this table). |
| Open Source Code | Yes | Also, our code is available at https://github.com/FER-multifrequency-so3/FER-multifrequency-so3. |
| Open Datasets | Yes | Dataset: We use ShapeNet, consisting of 13 major classes, following the categorization in Deng et al. (2021). The ModelNet40 (Wu et al., 2015) and ShapeNet (Chang et al., 2015) datasets are used. Dataset: We use the EGAD dataset (Morrison et al., 2020), comprising 2281 shapes. |
| Dataset Splits | No | The paper mentions 'validation mIoU scores' and states that '9843 are designated for training and the remainder for testing' for ModelNet40, but it does not provide specific percentages or methods for training/validation/test splits across all datasets, nor clear validation-split details for ShapeNet or EGAD (a hypothetical split sketch follows this table). |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments, such as GPU or CPU models, memory specifications, or cloud computing instance types. |
| Software Dependencies | No | The paper mentions using the 'Numpy library (Harris et al. (2020))' but does not specify a version number for it or any other software dependencies crucial for reproducibility (e.g., PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | We train the network for 300k iterations with a learning rate of 0.0001 and batch size of 64, selecting models based on the best validation mIoU scores following Mescheder et al. (2019). We adopt all other hyperparameters from Deng et al. (2021). We adopt the default hyperparameters provided in Puny et al. (2021), including the number of channels. (A minimal training-loop sketch using these settings follows this table.) |
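
The paper's Algorithm 1 constructs matrices J1, J2, J3; its exact multi-frequency construction is given in the paper itself. For orientation only, here is a minimal NumPy sketch of the standard frequency-1 so(3) generators that such a construction starts from; it is not the authors' algorithm.

```python
import numpy as np

# Standard basis of the so(3) Lie algebra: the frequency-1 generators of
# rotations about the x, y, and z axes. expm(theta * Jk) rotates by theta
# radians about axis k. This is a generic sketch, not the paper's Algorithm 1.
J1 = np.array([[0., 0., 0.],
               [0., 0., -1.],
               [0., 1., 0.]])
J2 = np.array([[0., 0., 1.],
               [0., 0., 0.],
               [-1., 0., 0.]])
J3 = np.array([[0., -1., 0.],
               [1., 0., 0.],
               [0., 0., 0.]])

# Sanity check: the commutation relation [J1, J2] = J3 holds.
assert np.allclose(J1 @ J2 - J2 @ J1, J3)
```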
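On the Dataset Splits row: ModelNet40's standard split assigns 9,843 shapes to training and the remaining 2,468 to testing. Since the paper does not describe how a validation set is obtained, the sketch below carves one out of the training indices purely as an illustrative assumption; `val_fraction` and the seed are hypothetical.

```python
import numpy as np

# ModelNet40's standard split: 9,843 training shapes, with the remainder
# (2,468) used for testing. The paper does not specify a validation split,
# so holding out a fraction of the training set, as here, is an assumption
# made only for illustration.
def make_splits(num_train=9843, val_fraction=0.1, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_train)
    num_val = int(num_train * val_fraction)
    return idx[num_val:], idx[:num_val]  # (train indices, val indices)

train_idx, val_idx = make_splits()
print(len(train_idx), len(val_idx))  # 8859 984
```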
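The Experiment Setup row fixes three hyperparameters: 300k iterations, a learning rate of 0.0001, and a batch size of 64. The paper names no framework or optimizer (see the Software Dependencies row), so the skeleton below is a hypothetical PyTorch-style sketch; `model`, `train_loader`, `loss_fn`, and the choice of Adam are all assumptions, not the authors' code.

```python
import itertools
import torch

# Hyperparameters quoted from the paper.
NUM_ITERATIONS = 300_000
LEARNING_RATE = 1e-4
BATCH_SIZE = 64  # used when constructing train_loader

def train(model, train_loader, loss_fn, device="cuda"):
    """Iteration-based training skeleton; not the authors' implementation.
    Adam is an assumed optimizer choice -- the paper adopts its remaining
    hyperparameters from Deng et al. (2021)."""
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    data_iter = itertools.cycle(train_loader)  # reuse the loader until 300k steps
    for step in range(NUM_ITERATIONS):
        points, target = next(data_iter)
        optimizer.zero_grad()
        loss = loss_fn(model(points.to(device)), target.to(device))
        loss.backward()
        optimizer.step()
```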