Ewald-based Long-Range Message Passing for Molecular Graphs
Authors: Arthur Kosmala, Johannes Gasteiger, Nicholas Gao, Stephan Günnemann
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test the approach with four baseline models and two datasets containing diverse periodic (OC20) and aperiodic structures (OE62). We observe robust improvements in energy mean absolute errors across all models and datasets, averaging 10% on OC20 and 16% on OE62. Our analysis shows an outsize impact of these improvements on structures with high long-range contributions to the ground truth energy. |
| Researcher Affiliation | Collaboration | (1) Max Planck Institute for the Science of Light & Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany (2) Department of Computer Science & Munich Data Science Institute, Technical University of Munich, Germany (3) Department of Physics, Ludwig-Maximilians-Universität München, Germany (4) Google Research. |
| Pseudocode | No | The paper does not contain any clearly labeled "Pseudocode" or "Algorithm" blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code or a direct link to a code repository for the described methodology. |
| Open Datasets | Yes | The OC20 dataset (Chanussot et al., 2021) features adsorption energies and atom forces for roughly 265 million structures... The OE62 dataset (Stuke et al., 2020) features, among other targets, DFT-computed energies (in eV) for roughly 62,000 large organic molecules. |
| Dataset Splits | Yes | We train our models on the OC20-2M subsplit... We partition the data in ca. 50000 structures for OE62-train, and ca. 6000 structures for each of OE62-val and OE62-test. |
| Hardware Specification | Yes | We run all models on Nvidia A100 GPUs and evaluate the runtimes of all model configurations in one session and on the same machine. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | All models have an initial learning rate of 1 × 10⁻⁴ except for GemNet-T (5 × 10⁻⁴). As in the PaiNN reference (Schütt et al., 2021), we use the Adam optimizer with weight decay λ = 0.01, along with a plateau scheduler (patience 10 and decay factor 0.5). |
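The Dataset Splits row above gives the OE62 partition only as approximate counts (ca. 50,000 / 6,000 / 6,000). Below is a minimal sketch of one way to produce such a partition, assuming a random split; the function name, permutation strategy, and seed are illustrative and not details from the paper.

```python
import torch

def split_oe62(num_structures: int = 62000, n_val: int = 6000,
               n_test: int = 6000, seed: int = 0):
    """Randomly partition structure indices into train/val/test sets (assumed strategy)."""
    generator = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_structures, generator=generator)
    test_idx = perm[:n_test]
    val_idx = perm[n_test:n_test + n_val]
    train_idx = perm[n_test + n_val:]  # remaining ~50,000 structures for OE62-train
    return train_idx, val_idx, test_idx

train_idx, val_idx, test_idx = split_oe62()
print(len(train_idx), len(val_idx), len(test_idx))  # ~50000 6000 6000
```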
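The Experiment Setup row describes the optimizer and learning-rate schedule. The following is a hedged sketch of that configuration in PyTorch; the stand-in model, the use of decoupled weight decay (AdamW), and the dummy validation metric are assumptions, not details confirmed by the paper.

```python
import torch

# Stand-in module; the actual backbones are models such as PaiNN or GemNet-T.
model = torch.nn.Linear(16, 1)

# "Adam optimizer with weight decay λ = 0.01" at an initial learning rate of
# 1e-4 (5e-4 for GemNet-T); decoupled weight decay (AdamW) is assumed here.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.01)

# Plateau scheduler: halve the learning rate after 10 epochs without improvement.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=10
)

# Toy loop feeding the scheduler a (placeholder) validation energy MAE.
for epoch in range(3):
    val_energy_mae = 1.0 / (epoch + 1)
    scheduler.step(val_energy_mae)
```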