Frame Averaging for Invariant and Equivariant Network Design
Authors: Omri Puny, Matan Atzmon, Edward J. Smith, Ishan Misra, Aditya Grover, Heli Ben-Hamu, Yaron Lipman
ICLR 2022
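As a rough illustration of the frame-averaging idea named in the title (a sketch, not the paper's implementation): a function f is made invariant by averaging f(g⁻¹x) over a small, input-dependent frame F(x) of group elements. The toy frame below, a single 2D rotation aligning x with the x-axis, and both function names are illustrative assumptions.

```python
import numpy as np

def frame(x):
    # Toy frame for 2D rotations: the one rotation matrix that
    # maps x onto the positive x-axis (i.e. g^{-1} applied to x).
    theta = np.arctan2(x[1], x[0])
    c, s = np.cos(theta), np.sin(theta)
    return [np.array([[c, s], [-s, c]])]

def frame_average(f, x):
    # f_FA(x) = (1/|F(x)|) * sum_{g in F(x)} f(g^{-1} x):
    # the averaged function is invariant to rotations of x.
    gs = frame(x)
    return sum(f(g @ x) for g in gs) / len(gs)

# f(v) = v[0] is not rotation-invariant, but its frame average is:
# for any rotation of [3, 4], frame_average returns the norm, 5.0.
print(frame_average(lambda v: v[0], np.array([3.0, 4.0])))  # → 5.0
```

Because the frame maps every rotated copy of x to the same canonical point, the averaged function takes the same value on all of them, which is the invariance property FA guarantees in general.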
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the practical effectiveness of FA on several applications including point cloud normal estimation, beyond 2-WL graph separation, and n-body dynamics prediction, achieving state-of-the-art results in all of these benchmarks. |
| Researcher Affiliation | Collaboration | Omri Puny¹, Matan Atzmon¹, Heli Ben-Hamu¹, Ishan Misra², Aditya Grover², Edward J. Smith², Yaron Lipman² (¹Weizmann Institute of Science, ²Facebook AI Research) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It describes the framework and methods in prose and mathematical equations. |
| Open Source Code | No | The paper does not provide concrete access to source code. It does not contain a specific repository link or an explicit code release statement. |
| Open Datasets | Yes | We used the ABC dataset (Koch et al., 2019) that contains 3 collections (10k, 50k, and 100k models each) of Computer-Aided Design (CAD) models. ... We use two datasets: GRAPH8c (Balcilar et al., 2021) that consists of all non-isomorphic, connected 8 node graphs; and EXP (Abboud et al., 2021) that consists of 3-WL distinguishable graphs that are not 2-WL distinguishable. ... The dataset, created in (Satorras et al., 2021; Fuchs et al., 2020), consists of a collection of n = 5 particles systems |
| Dataset Splits | Yes | We follow the protocol of the benchmark suggested in Koch et al. (2019), and quantitatively measure normal estimation quality via 1 − (nᵀ n̂)², with n being the ground truth normal and n̂ the normal prediction. We used the same random train/test splits from Koch et al. (2019). |
| Hardware Specification | Yes | Training was done on a single Nvidia V-100 GPU, using PYTORCH deep learning framework (Paszke et al., 2019). ... Training was done on a single Nvidia RTX-8000 GPU, using PYTORCH deep learning framework. ... Training was done on a single Nvidia RTX-6000 GPU, using PYTORCH deep learning framework. |
| Software Dependencies | Yes | Training was done on a single Nvidia V-100 GPU, using PYTORCH deep learning framework (Paszke et al., 2019). ... We trained our networks using the ADAM (Kingma & Ba, 2014) optimizer |
| Experiment Setup | Yes | We trained our networks using the ADAM (Kingma & Ba, 2014) optimizer, setting the batch size to 32 and 16 for PointNet and DGCNN respectively. We set a fixed learning rate of 0.001. All models were trained for 250 epochs. ... We followed the protocol from (Balcilar et al., 2021) and trained our model with batch size 100 for 200 epochs. The learning rate was set to 0.001 and did not change during training. For optimization we used the ADAM optimizer. ... We followed the protocol from (Satorras et al., 2021) and trained our model with batch size 100 for 10000 epochs. The learning rate was set to 0.001 and did not change during training. For optimization we used the ADAM optimizer. |
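The ABC benchmark metric quoted under Dataset Splits, 1 − (nᵀ n̂)², can be sketched as follows; the function name and the explicit unit-normalization step are illustrative assumptions, not from the paper.

```python
import numpy as np

def normal_error(n_true, n_pred):
    # Unoriented normal-estimation error 1 - (n^T n_hat)^2 from the
    # ABC benchmark protocol (Koch et al., 2019). Squaring the dot
    # product makes the error invariant to flipping either normal.
    n_true = n_true / np.linalg.norm(n_true)
    n_pred = n_pred / np.linalg.norm(n_pred)
    return 1.0 - float(np.dot(n_true, n_pred)) ** 2

# A prediction that is correct up to sign incurs zero error:
print(normal_error(np.array([0.0, 0.0, 1.0]),
                   np.array([0.0, 0.0, -1.0])))  # → 0.0
```

The error ranges from 0 (parallel or anti-parallel normals) to 1 (orthogonal normals), which is why sign-ambiguous normal estimation is evaluated this way.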