Spherical Channels for Modeling Atomic Interactions

Authors: Larry Zitnick, Abhishek Das, Adeesh Kolluru, Janice Lan, Muhammed Shuaibi, Anuroop Sriram, Zachary Ulissi, Brandon Wood

NeurIPS 2022

Reproducibility Assessment
For each variable below, the assessed result is followed by the supporting LLM response.
Research Type: Experimental
LLM Response: "We demonstrate state-of-the-art results on the large-scale Open Catalyst 2020 dataset in both energy and force prediction for numerous tasks and metrics." "We present results on the Open Catalyst 2020 (OC20) dataset [6] that is released under a Creative Commons Attribution 4.0 License. OC20 contains over 130M training examples for the task of predicting atomic energies and forces for catalysts used in renewable energy storage and other important applications [48]. This dataset is a popular benchmark for the ML community. We begin by comparing results across all tasks on the test set. Next, we show numerous ablation studies comparing model variations on the smaller OC20 2M dataset."

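For context, a minimal sketch of loading one OC20 training split. It assumes the LmdbDataset class, the {"src": ...} config, and the LMDB data layout from the Open Catalyst (ocp) GitHub repo; the import path, config keys, and attribute names should all be verified against the current codebase before use.

    # Hypothetical sketch: load an OC20 S2EF LMDB split via the ocp codebase.
    # `LmdbDataset` and the attribute names follow the ocp repo's conventions;
    # verify against the current API, as they may have changed.
    from ocpmodels.datasets import LmdbDataset

    dataset = LmdbDataset({"src": "data/s2ef/2M/train/"})  # path is illustrative

    sample = dataset[0]        # a torch_geometric Data object
    print(sample.pos.shape)    # atom positions, [num_atoms, 3]
    print(sample.force.shape)  # per-atom force labels, [num_atoms, 3]
    print(sample.y)            # energy label for the structure
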
Researcher Affiliation: Collaboration
LLM Response: C. Lawrence Zitnick (1), Abhishek Das (1), Adeesh Kolluru (2), Janice Lan (1), Muhammed Shuaibi (2), Anuroop Sriram (1), Zachary Ulissi (2), Brandon Wood (1); (1) Fundamental AI Research at Meta AI, (2) Carnegie Mellon University.

Pseudocode: No
LLM Response: The paper does not contain any structured pseudocode or algorithm blocks; it uses block diagrams to illustrate processes.

Open Source Code: Yes
LLM Response: "All model code will be open sourced with an MIT license in the Open Catalyst GitHub repo."

Open Datasets: Yes
LLM Response: "We present results on the Open Catalyst 2020 (OC20) dataset [6] that is released under a Creative Commons Attribution 4.0 License."

Dataset Splits: Yes
LLM Response: "The validation results are averaged across the four OC20 Validation set splits." "Training curves for SCN and GemNet models for force MAEs per epoch evaluated on a 30k subset of the validation ID dataset."

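To make the averaging concrete: OC20 defines four validation splits (in-domain plus three out-of-domain splits for unseen adsorbates, unseen catalysts, and both), and reported validation numbers are their mean. A small sketch, where evaluate_force_mae is a hypothetical placeholder for a model-specific evaluation loop:

    # Average a metric over the four OC20 validation splits.
    # `evaluate_force_mae` is a placeholder, not an ocp API.
    SPLITS = ["val_id", "val_ood_ads", "val_ood_cat", "val_ood_both"]

    def evaluate_force_mae(model, split_name):
        """Placeholder: run `model` on one split, return force MAE in eV/Angstrom."""
        raise NotImplementedError

    def averaged_validation_mae(model):
        maes = [evaluate_force_mae(model, s) for s in SPLITS]
        return sum(maes) / len(SPLITS)
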
Hardware Specification: No
LLM Response: The paper mentions training on "16 GPUs" and "32 GPUs" and discusses "32GB RAM" in relation to batch size, but does not specify the GPU model or other hardware components (e.g., CPU or memory configuration).

Software Dependencies: No
LLM Response: The paper mentions using "the code provided by Euclidean neural networks (e3nn)" and "PyTorch's Automatic Mixed Precision (AMP)" but does not provide version numbers for these software components.

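Because no versions are pinned, anyone attempting a reproduction should record their own environment. A minimal sketch that logs the versions of the dependencies the paper names:

    # Log versions of the dependencies named in the paper (PyTorch, e3nn),
    # since the paper itself does not pin them.
    import torch
    import e3nn

    print("torch:", torch.__version__)
    print("e3nn: ", e3nn.__version__)
    print("CUDA: ", torch.version.cuda)
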
Experiment Setup: Yes
LLM Response: "Unless otherwise stated, C = 128, K = 16, H = 1024, E = 128, and for F_e in message passing only orders m ∈ [−1, 1] are used. All forces are estimated directly as an output of the network, unless stated that the energy-conserving gradient-based approach was used. During training, the coefficients for the force and energy losses are 100 and 2 respectively. Training is performed using the AdamW optimizer [29] with a learning rate of 0.0004. The effective batch size is increased using data parallelism and PyTorch's Automatic Mixed Precision (AMP). All model code will be open sourced with an MIT license in the Open Catalyst GitHub repo."

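To make the quoted setup concrete, a hedged sketch of the loss weighting and optimizer configuration it describes: force and energy loss coefficients of 100 and 2, AdamW at a learning rate of 0.0004, and PyTorch AMP. The model and loader are placeholders, the L1 loss is an assumption standing in for the paper's exact loss terms, and this is not the authors' released training code.

    # Hedged sketch of the quoted setup: loss = 100 * force + 2 * energy,
    # AdamW at lr 4e-4, PyTorch Automatic Mixed Precision.
    # `model` and `loader` are placeholders; L1 losses are an assumption.
    import torch

    optimizer = torch.optim.AdamW(model.parameters(), lr=0.0004)
    scaler = torch.cuda.amp.GradScaler()
    l1 = torch.nn.L1Loss()

    for batch in loader:
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            energy, forces = model(batch)  # forces predicted directly, not via gradients
            loss = 100 * l1(forces, batch.force) + 2 * l1(energy, batch.y)
        scaler.scale(loss).backward()  # scale the loss to avoid fp16 underflow
        scaler.step(optimizer)
        scaler.update()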