Conformal Crystal Graph Transformer with Robust Encoding of Periodic Invariance

Authors: Yingheng Wang, Shufeng Kong, John M. Gregoire, Carla P. Gomes

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Through comprehensive evaluation, we verify our model's superior performance in 5 crystal prediction tasks, reaffirming the efficiency of our proposed methods." ... "conducting comprehensive experiments over 5 tasks on the Jarvis materials benchmark (Choudhary et al. 2020), highlighting the significance of our components and showing our model's superior performance, and subsequently verifying the effectiveness of our proposed construction and learning methods."
Researcher Affiliation | Academia | (1) Department of Computer Science, Cornell University, USA; (2) Liquid Sunlight Alliance, California Institute of Technology, USA; (3) School of Software Engineering, Sun Yat-sen University, China
Pseudocode | No | The paper includes "Figure 1: Architecture Overview," which illustrates the model's components, but it is a high-level diagram and not a structured pseudocode or algorithm block with step-by-step instructions.
Open Source Code | No | The paper does not provide any explicit statement or link to open-source code for the described methodology.
Open Datasets | Yes | "We test on five crystal property prediction tasks using the JARVIS (Choudhary et al. 2020) benchmark, specifically its DFT-2021.8.18 3D version, which features 55,722 crystals." (A hedged loading-and-split sketch follows the table.)
Dataset Splits | Yes | "We adopt data splits as per (Lin et al. 2023; Yan et al. 2022) to ensure a fair comparison."
Hardware Specification | No | The paper discusses computational efficiency but does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions various optimizers and baselines but does not specify version numbers for any software dependencies or libraries (e.g., "PyTorch 1.9" or "Python 3.8").
Experiment Setup | Yes | "Learning rates and training epochs are mildly adjusted, starting from 0.0005 and 1000 respectively, depending on the task. Specific configurations for each task can be found in the Appendix." (An illustrative training-configuration sketch follows below.)
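
The Open Datasets and Dataset Splits rows point to the JARVIS DFT-3D benchmark and to the splits of Lin et al. 2023 and Yan et al. 2022. The following is a minimal sketch of how such a setup could be reproduced with jarvis-tools and scikit-learn; the dataset identifier "dft_3d", the property key "formation_energy_peratom", the "na" missing-value convention, and the 80/10/10 fixed-seed split are assumptions based on common practice for this benchmark, not details confirmed by the paper.

```python
# Hypothetical sketch: load a JARVIS DFT-3D snapshot and apply a fixed-seed
# train/val/test split. Identifiers and ratios below are assumptions, not
# values stated in the paper.
from jarvis.db.figshare import data        # jarvis-tools package
from sklearn.model_selection import train_test_split

# "dft_3d" is the JARVIS-DFT 3D dataset; the exact snapshot name for the
# DFT-2021.8.18 release may differ across jarvis-tools versions.
records = data("dft_3d")

# Keep only entries that carry a value for the target property
# (formation energy per atom is used here purely as an example).
target = "formation_energy_peratom"
records = [r for r in records if r.get(target) not in (None, "na")]

# 80/10/10 split with a fixed seed, a common convention for this benchmark;
# the paper only states that it reuses the splits of prior work.
train, rest = train_test_split(records, test_size=0.2, random_state=123)
val, test = train_test_split(rest, test_size=0.5, random_state=123)
print(len(train), len(val), len(test))
```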
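The Experiment Setup row states a starting learning rate of 0.0005 and 1000 training epochs, with per-task configurations deferred to the Appendix. Below is a hedged PyTorch sketch of such a configuration; the AdamW optimizer, cosine schedule, L1 loss, placeholder model, and `train_loader` are illustrative assumptions rather than the paper's documented choices.

```python
# Minimal sketch of the stated optimization settings (lr 0.0005, 1000 epochs).
# Everything besides those two numbers is an assumption for illustration.
import torch

model = torch.nn.Linear(64, 1)  # placeholder for the crystal graph transformer
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)
loss_fn = torch.nn.L1Loss()  # MAE is the usual reporting metric on JARVIS tasks

for epoch in range(1000):
    for x, y in train_loader:            # train_loader is assumed to exist
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```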