An Explicit Frame Construction for Normalizing 3D Point Clouds

Authors: Justin Baker, Shih-Hsin Wang, Tommaso De Fernex, Bao Wang

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, our algorithm outperforms existing methods in effectiveness and generalizability across diverse benchmark datasets. We validate our approach with a comprehensive comparison of our ASymmetric Unit Normalization (ASUN) with existing and publicly available methods on benchmark datasets.
Researcher Affiliation | Academia | Department of Mathematics, University of Utah, Salt Lake City, Utah, USA; Scientific Computing and Imaging (SCI) Institute, University of Utah, Salt Lake City, Utah, USA.
Pseudocode | No | The paper describes its methods step by step (e.g., in Sections 4 and 4.1), but these are textual descriptions rather than structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/Utah-Math-Data-Science/alignment.
Open Datasets | Yes | The datasets consist of QM9 (Ramakrishnan et al., 2014) molecular data, ModelNet40 (Wu et al., 2015) CAD point cloud data, and n-body (Satorras et al., 2021) point cloud trajectories.
Dataset Splits | Yes | Training is performed on 3000 trajectories, with 2000 unique trajectories used for validation and 2000 unique trajectories used for testing.
Hardware Specification | Yes | Also, we note that eSCN exceeds the memory capacity of the A100 GPU.
Software Dependencies | No | The paper mentions using the Adam optimizer and various GNN models (SchNet, EGNN, MACE, eSCN) but does not provide specific versions for these or for other software dependencies such as Python, PyTorch, or TensorFlow.
Experiment Setup | Yes | The Wasserstein distance (Villani et al., 2009), i.e., the earth mover's distance (EMD), is used to compute the loss between the initial normalization and the normalization after perturbation. Perturbation is applied a total of 100 times per molecular structure. The learning-based AE model is trained on the QM9 positional data following the procedure of Winter et al. (2022). In particular, the RMSE of the predicted positional data is minimized using the Adam optimizer. For this task, we use the architectures, training procedures, and hyperparameters described in (Satorras et al., 2021).
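
The Dataset Splits row quotes a 3000/2000/2000 train/validation/test partition of unique n-body trajectories. A minimal sketch of such a partition in Python, where the file name, array layout, and random seed are assumptions (the actual data generation follows Satorras et al., 2021):

```python
import numpy as np

# Hypothetical container of 7000 simulated n-body trajectories.
trajectories = np.load("nbody_trajectories.npy")  # assumed shape: (7000, ...)

rng = np.random.default_rng(seed=0)  # fixed seed keeps the split reproducible
perm = rng.permutation(len(trajectories))

# 3000 train / 2000 validation / 2000 test, with no trajectory reused.
train = trajectories[perm[:3000]]
val = trajectories[perm[3000:5000]]
test = trajectories[perm[5000:7000]]
```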
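The Experiment Setup row describes a perturbation study in which the EMD is computed between a point cloud's normalization before and after perturbation, repeated 100 times per molecular structure. Below is a minimal sketch of that evaluation loop. Here `normalize` stands in for the paper's ASUN procedure, and the Gaussian noise model with scale `sigma` is an assumption; for equal-size point clouds with uniform weights, the EMD reduces to an optimal assignment over pairwise distances:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def emd(x, y):
    """Earth mover's distance between two equal-size point clouds,
    solved exactly as a linear assignment over pairwise distances."""
    cost = cdist(x, y)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

def perturbation_consistency(points, normalize, n_trials=100, sigma=1e-2):
    """Average EMD between the normalization of `points` and the
    normalizations of 100 perturbed copies. The Gaussian perturbation
    and its scale `sigma` are assumptions, not the paper's exact setup."""
    base = normalize(points)
    rng = np.random.default_rng(0)
    losses = [
        emd(base, normalize(points + sigma * rng.standard_normal(points.shape)))
        for _ in range(n_trials)
    ]
    return float(np.mean(losses))
```

A perfectly perturbation-stable normalization would drive this average toward zero, which is what makes the EMD a natural loss for comparing the two normalizations.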