Efficient Approximations of Complete Interatomic Potentials for Crystal Property Prediction
Authors: Yuchao Lin, Keqiang Yan, Youzhi Luo, Yi Liu, Xiaoning Qian, Shuiwang Ji
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on the JARVIS and Materials Project benchmarks for evaluation. Results show that the use of interatomic potentials and complete interatomic potentials leads to consistent performance improvements with reasonable computational costs. |
| Researcher Affiliation | Academia | 1Department of Computer Science & Engineering, Texas A&M University, College Station, TX, USA 2Department of Computer Science, Florida State University, Tallahassee, FL, USA 3Department of Electrical & Computer Engineering, Texas A&M University, College Station, TX, USA. Correspondence to: Shuiwang Ji <sji@tamu.edu>. |
| Pseudocode | No | The paper describes its methods through mathematical equations and textual explanations, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available as part of the AIRS library (https://github.com/divelab/AIRS). |
| Open Datasets | Yes | We conduct experiments on two material benchmark datasets, including The Materials Project and JARVIS. To make the comparisons fair, we follow the settings of the previous state-of-the-art (SOTA) Matformer (Yan et al., 2022) for all tasks since they retrain all baselines using the same dataset settings and the same data splits from ALIGNN (Choudhary & DeCost, 2021). |
| Dataset Splits | Yes | We follow Matformer (Yan et al., 2022) and use the same training, validation, and test splits for all these tasks, and also use their retrained baseline results. |
| Hardware Specification | Yes | For all tasks on two benchmark datasets, we use one NVIDIA RTX A6000 48GB GPU as well as an Intel Xeon Gold 6258R CPU for computing. |
| Software Dependencies | No | We use PyTorch and Cython to implement our models. Our implementation is based on Cython, GNU Scientific Library (Galassi et al., 2002) and ScaFaCoS (Bolten et al.), in which the native incomplete Gamma function and incomplete Bessel function are used. |
| Experiment Setup | Yes | All PotNet models are trained using the Adam (Kingma & Ba, 2014) optimizer with weight decay (Loshchilov & Hutter, 2017) and one cycle learning rate scheduler (Smith & Topin, 2019) with a learning rate of 0.001, training epoch of 500, and batch size of 64. |
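
The Open Datasets and Dataset Splits rows above refer to the JARVIS and Materials Project benchmarks with the Matformer/ALIGNN splits. As a point of reference only, the sketch below shows one common way to retrieve the JARVIS DFT-3D records with the `jarvis-tools` package; this is an assumption about tooling rather than the paper's pipeline, and it does not reproduce the Matformer/ALIGNN train/validation/test splits (those are handled in the authors' AIRS code).

```python
# Illustrative only: pull the JARVIS DFT-3D dataset with the jarvis-tools
# package. This is NOT the paper's data pipeline and does not apply the
# Matformer/ALIGNN splits used in the reported experiments.
from jarvis.db.figshare import data

# Download (and cache) the DFT-3D records; each entry is a dict holding the
# crystal structure under "atoms" plus computed properties.
dft_3d = data("dft_3d")

print(len(dft_3d))  # number of crystals in the downloaded snapshot

# In recent snapshots the per-atom formation energy (one of the benchmark
# regression targets) is stored under this key; missing values appear as "na".
sample = dft_3d[0]
print(sample["formation_energy_peratom"])
```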
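The Experiment Setup row quotes Adam with weight decay, a one-cycle learning-rate schedule, a learning rate of 0.001, 500 training epochs, and a batch size of 64. The following is a minimal sketch of how those settings map onto standard PyTorch components. The model, data, loss, and weight-decay value are placeholders, and reading "Adam with weight decay" as `AdamW` and 0.001 as the peak one-cycle rate are assumptions; this is not PotNet itself, whose implementation is in the AIRS repository linked above.

```python
# Minimal sketch of the reported optimizer/scheduler settings using standard
# PyTorch components; the model, data loader, loss, and weight-decay value are
# placeholders and do NOT reproduce PotNet (see the AIRS repository for that).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

EPOCHS, BATCH_SIZE, LR = 500, 64, 1e-3  # values quoted in the Experiment Setup row

# Placeholder model and random data standing in for PotNet and the crystal graphs.
model = nn.Sequential(nn.Linear(16, 64), nn.SiLU(), nn.Linear(64, 1))
dataset = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))
loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)

# "Adam with weight decay" is read here as AdamW (Loshchilov & Hutter, 2017);
# the weight-decay value below is a placeholder, not taken from the paper.
optimizer = torch.optim.AdamW(model.parameters(), lr=LR, weight_decay=1e-5)

# One-cycle learning-rate schedule (Smith & Topin, 2019), stepped once per batch.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=LR, epochs=EPOCHS, steps_per_epoch=len(loader)
)

loss_fn = nn.L1Loss()  # placeholder objective (MAE is the metric reported on these benchmarks)

for epoch in range(EPOCHS):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()
```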