Complete and Efficient Graph Transformers for Crystal Material Property Prediction

Authors: Keqiang Yan, Cong Fu, Xiaofeng Qian, Xiaoning Qian, Shuiwang Ji

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate the state-of-the-art predictive accuracy of ComFormer variants on various tasks across three widely-used crystal benchmarks.
Researcher Affiliation | Academia | Keqiang Yan, Cong Fu, Xiaofeng Qian, Xiaoning Qian, Shuiwang Ji; Texas A&M University, College Station, TX 77843, USA; {keqiangyan,congfu,xqian,feng,sji}@tamu.edu
Pseudocode | No | The paper describes the proposed methods using text and mathematical equations (e.g., in Section 4.1 for message passing) but does not include any explicitly labeled 'Algorithm' or 'Pseudocode' blocks.
Open Source Code | Yes | Our code is publicly available as part of the AIRS library (https://github.com/divelab/AIRS).
Open Datasets | Yes | We assess the expressiveness of our iComFormer and eComFormer models by conducting evaluations on three widely-used crystal benchmarks: JARVIS (Choudhary et al., 2020), the Materials Project (Chen et al., 2019), and MatBench (Dunn et al., 2020). (See the dataset-loading sketch below the table.)
Dataset Splits | Yes | The training, validation, and test sets for the formation energy, total energy, and bandgap (OPT) prediction tasks contain 44,578, 5,572, and 5,572 crystals; 44,296, 5,537, and 5,537 crystals for Ehull; and 14,537, 1,817, and 1,817 crystals for bandgap (MBJ). (See the split sketch below the table.)
Hardware Specification | Yes | We use one TITAN A100 for computing.
Software Dependencies | No | The paper mentions the use of the 'e3nn' library and optimizers like 'Adam', but it does not specify concrete version numbers for any software dependencies required to reproduce the experiments. (See the environment-recording sketch below the table.)
Experiment Setup | Yes | We provide detailed model settings of iComFormer and eComFormer, and present their hyperparameter settings for different datasets and tasks in this section. For all the experiments across the three crystal datasets, we use two node-wise transformer layers with one node-wise equivariant updating layer in between to form the eComFormer, and use one node-wise transformer layer, followed by one edge-wise transformer layer, and then l-1 node-wise transformer layers to form the l-layer iComFormer. (See the layer-composition sketch below the table.)
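For the Open Datasets row, the JARVIS benchmark can be fetched programmatically with the jarvis-tools package. This is a minimal sketch, assuming the standard `dft_3d` snapshot and its usual property keys; the paper itself does not show any loading code, so the keys below are assumptions about the dataset, not quotes from the paper.

```python
# Minimal sketch: fetch the JARVIS-DFT 3D dataset with jarvis-tools.
# The property keys used here are assumptions based on the public dft_3d
# snapshot, not code from the paper.
from jarvis.db.figshare import data

# Downloads (and caches) the JARVIS-DFT 3D entries as a list of dicts.
dft_3d = data("dft_3d")

# Each entry carries a crystal structure plus DFT-computed targets;
# "formation_energy_peratom" and "optb88vdw_bandgap" are the usual keys
# for the formation-energy and bandgap (OPT) tasks.
sample = dft_3d[0]
print(sample["formation_energy_peratom"], sample["optb88vdw_bandgap"])
```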
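For the Dataset Splits row, the reported sizes correspond to an 80/10/10 partition (44,578 + 5,572 + 5,572 = 55,722 crystals for the formation-energy, total-energy, and bandgap (OPT) tasks). A minimal sketch of such a split follows, assuming a fixed random seed of the reproducer's choosing, since the paper does not state the seed or shuffling procedure it used.

```python
# Sketch of an 80/10/10 split reproducing the reported sizes
# (44578/5572/5572 out of 55722). The seed is an assumption; the paper
# does not state one.
import numpy as np

def split_indices(n, valid_frac=0.1, test_frac=0.1, seed=123):
    """Shuffle once with a fixed seed, carve off valid/test, keep the rest for training."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_valid = int(n * valid_frac)  # 5572 of 55722
    n_test = int(n * test_frac)    # 5572 of 55722
    n_train = n - n_valid - n_test # 44578 of 55722
    return idx[:n_train], idx[n_train:n_train + n_valid], idx[n_train + n_valid:]

train_idx, valid_idx, test_idx = split_indices(55722)
print(len(train_idx), len(valid_idx), len(test_idx))  # 44578 5572 5572
```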
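For the Software Dependencies row, a reproduction should at least record the versions it actually ran with, since the paper pins none. A small sketch using only the standard library; `torch` is assumed here as the framework backing `e3nn`, which the paper mentions but does not version.

```python
# Record installed versions of the packages the paper names (e3nn) plus
# assumed companions (torch, numpy), since no versions are pinned in the paper.
import importlib.metadata as md

for pkg in ("torch", "e3nn", "numpy"):
    try:
        print(f"{pkg}=={md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed")
```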
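For the Experiment Setup row, the stated layer orderings can be written down as a composition sketch. The layer classes below are hypothetical placeholders, not the actual implementations, which live in the AIRS repository; only the ordering follows the paper's description.

```python
# Hedged sketch of the stated layer orderings. NodeTransformerLayer,
# EdgeTransformerLayer, and EquivariantUpdateLayer are hypothetical
# stand-ins for the real AIRS modules.
import torch.nn as nn

class NodeTransformerLayer(nn.Module): ...    # node-wise transformer (placeholder)
class EdgeTransformerLayer(nn.Module): ...    # edge-wise transformer (placeholder)
class EquivariantUpdateLayer(nn.Module): ...  # node-wise equivariant update (placeholder)

def build_ecomformer():
    # Two node-wise transformer layers with one equivariant updating layer in between.
    return nn.ModuleList([NodeTransformerLayer(),
                          EquivariantUpdateLayer(),
                          NodeTransformerLayer()])

def build_icomformer(num_layers: int):
    # One node-wise layer, one edge-wise layer, then l-1 node-wise layers,
    # following the paper's description of the l-layer iComFormer.
    layers = [NodeTransformerLayer(), EdgeTransformerLayer()]
    layers += [NodeTransformerLayer() for _ in range(num_layers - 1)]
    return nn.ModuleList(layers)
```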