Equivariant Networks for Crystal Structures

Authors: Oumar Kaba, Siamak Ravanbakhsh

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, these models achieve competitive results with state-of-the-art on property prediction tasks." and "We perform experimental tests of our models on the Materials Project database and report results comparable to or better than baselines."
Researcher Affiliation | Academia | School of Computer Science, McGill University; Mila - Quebec Artificial Intelligence Institute
Pseudocode | No | The paper expresses its message passing as mathematical equations but does not include any structured pseudocode or algorithm blocks. (A generic sketch of such a layer is given after the table.)
Open Source Code | No | "A link to the relevant code will also be accessible upon publication of the paper."
Open Datasets | Yes | "We perform experiments using the Materials Project dataset [33]. This standard dataset of materials informatics comprises more than 120K materials..." and "Finally, we perform experiments using the Perov-5 dataset [9] as provided by [69]." (A data-access sketch follows the table.)
Dataset Splits | Yes | "Training, validation, and test splits are 80%, 10%, and 10% of the dataset." (A split sketch follows the table.)
Hardware Specification | Yes | "The training was performed on a single NVIDIA A100 GPU and a dual Intel Gold 6248R CPU, 20 cores machine. Experiments were performed on Mila's compute cluster with Slurm."
Software Dependencies | No | The paper mentions using "Pytorch [49]" and the "Pytorch Scatter package [22]" but does not provide specific version numbers for these software dependencies.
Experiment Setup | Yes | "The full hyperparameter setup is provided in Appendix A.7." From Appendix A.7: "The learning rate was chosen with a cosine annealing schedule with a warm-up period, starting from a learning rate of 1e-4 and decaying to 1e-6. The batch size was set to 64. The model was trained for 100 epochs, and the best model was chosen based on the validation set performance. We used a weight decay of 1e-2 and gradient clipping with a norm of 1.0. We use a hidden dimension of 128 for all layers, except the output layer." (A training-setup sketch follows the table.)
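
Since the paper gives its message-passing updates only as equations, the following is a minimal PyTorch sketch of what a generic message-passing layer of this kind looks like. It is illustrative only: the MLP shapes, the residual update, and the sum aggregation are assumptions, not the authors' architecture; only the use of PyTorch and the PyTorch Scatter package is taken from the paper. The hidden dimension of 128 matches the value quoted from Appendix A.7.

```python
import torch
import torch.nn as nn
from torch_scatter import scatter_add  # aggregation primitive cited by the paper

class MessagePassingLayer(nn.Module):
    """Generic message-passing layer: an illustrative sketch, not the paper's model."""

    def __init__(self, hidden_dim: int = 128, edge_dim: int = 16):
        super().__init__()
        # Computes a message from (sender, receiver, edge) features.
        self.message_mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim + edge_dim, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        # Updates each node state from its aggregated incoming messages.
        self.update_mlp = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, h, edge_index, edge_attr):
        src, dst = edge_index  # [2, E] tensor of edges j -> i
        m = self.message_mlp(torch.cat([h[src], h[dst], edge_attr], dim=-1))
        # Sum the messages arriving at each node i.
        agg = scatter_add(m, dst, dim=0, dim_size=h.size(0))
        return h + self.update_mlp(torch.cat([h, agg], dim=-1))  # residual update
```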
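The Materials Project data is publicly available. Below is a minimal sketch of fetching a single structure programmatically; pymatgen and its MPRester client are assumptions for illustration (the paper does not state how the data was retrieved), and the API key is a placeholder.

```python
# Minimal sketch of pulling one Materials Project structure via pymatgen's
# MPRester. pymatgen is an assumption, not a dependency cited in the paper;
# "YOUR_API_KEY" is a placeholder for a registered Materials Project key.
from pymatgen.ext.matproj import MPRester

with MPRester("YOUR_API_KEY") as mpr:
    # mp-149 is silicon; any Materials Project ID works here.
    structure = mpr.get_structure_by_material_id("mp-149")

print(structure.composition, structure.lattice.abc)
```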
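The reported 80/10/10 partition can be reproduced with PyTorch's random_split, as in the sketch below; the random seed and the use of a uniformly random (rather than stratified) split are assumptions, since the quoted text does not specify them.

```python
# Sketch of the reported 80/10/10 split. The seed and the choice of a random
# split are assumptions; only the 80/10/10 proportions are quoted.
import torch
from torch.utils.data import TensorDataset, random_split

dataset = TensorDataset(torch.randn(1000, 8))  # stand-in for the real dataset
n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
n_test = n - n_train - n_val  # remainder, so the sizes always sum to n

train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n_test],
    generator=torch.Generator().manual_seed(0),
)
```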
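The quoted Appendix A.7 values translate directly into a PyTorch training configuration. In the sketch below, the AdamW optimizer, the five-epoch warm-up length, and the LinearLR warm-up mechanism are assumptions; the learning-rate range, batch size, epoch count, weight decay, and clipping norm are the quoted values.

```python
# Sketch of the quoted Appendix A.7 setup. AdamW, the warm-up length, and the
# LinearLR warm-up are assumptions; lr 1e-4 -> 1e-6, 100 epochs, batch size 64,
# weight decay 1e-2, and clip norm 1.0 are quoted values.
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

model = torch.nn.Linear(128, 1)   # placeholder; hidden dimension 128 is quoted
epochs = 100
batch_size = 64                   # quoted; the DataLoader using it is omitted
warmup_epochs = 5                 # warm-up length is an assumption

optimizer = AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
scheduler = SequentialLR(
    optimizer,
    schedulers=[
        LinearLR(optimizer, start_factor=0.1, total_iters=warmup_epochs),
        CosineAnnealingLR(optimizer, T_max=epochs - warmup_epochs, eta_min=1e-6),
    ],
    milestones=[warmup_epochs],
)

for epoch in range(epochs):
    # ... forward pass, loss computation, loss.backward() ...
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```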