Equivariant Transformers for Neural Network based Molecular Potentials

Authors: Philipp Thölke, Gianni De Fabritiis

ICLR 2022

Reproducibility variables, each with the assessed result and the LLM's supporting response:
Research Type: Experimental. "We evaluate the equivariant Transformer on the QM9 (Ramakrishnan et al., 2014), MD17 (Chmiela et al., 2017) and ANI-1 (Smith et al., 2017b) benchmark datasets."
Researcher Affiliation: Collaboration. "Philipp Thölke, Computational Science Laboratory, Pompeu Fabra University, PRBB, C/ Doctor Aiguader 88, 08003 Barcelona, Spain, and Institute of Cognitive Science, Osnabrück University, Neuer Graben 29 / Schloss, 49074 Osnabrück, Germany (philipp.thoelke@posteo.de); Gianni De Fabritiis, Computational Science Laboratory, Pompeu Fabra University, C/ Doctor Aiguader 88, 08003 Barcelona, Spain, ICREA, Passeig Lluís Companys 23, 08010 Barcelona, Spain, and Acellera Labs, C/ Doctor Trueta 183, 08005 Barcelona, Spain (gianni.defabritiis@upf.edu)."
Pseudocode: No. The paper includes architectural diagrams (Figure 1) but does not present pseudocode or algorithm blocks.
Open Source Code: Yes. "all source code for training, running and analyzing the models presented in this work is available at github.com/torchmd/torchmd-net."
Open Datasets: Yes. "The datasets QM9 [1], MD17 [2] and ANI-1 [3] are publicly available and all source code for training, running and analyzing the models presented in this work is available at github.com/torchmd/torchmd-net."
[1] https://doi.org/10.6084/m9.figshare.c.978904.v5
[2] http://www.quantum-machine.org/gdml/#datasets
[3] https://figshare.com/articles/dataset/ANI-1x_Dataset_Release/10047041/1
Dataset Splits: Yes. QM9: "The remaining molecules were split into a training set with 110,000 and a validation set with 10,000 samples, leaving 10,831 samples for testing." MD17: "the model is trained on only 1000 samples from which 50 are used for validation." ANI-1: "The model is fitted on DFT energies from 80% of the dataset, while 5% are used as validation and the remaining 15% of the data make up the test set." (See the split sketch after this table.)
Hardware Specification: Yes. "All models in this work were trained using distributed training across two NVIDIA RTX 2080 Ti GPUs, using the DDP training protocol." and "...on an NVIDIA V100 GPU (see Table 3)." (See the DDP configuration sketch after this table.)
Software Dependencies: No. The paper mentions software such as PyTorch, PyTorch Geometric, and pytorch-lightning, but does not provide version numbers for these dependencies. (See the version-capture sketch after this table.)
Experiment Setup: Yes. Table 4 compares the hyperparameters used for QM9, MD17 and ANI-1 (see the learning-rate schedule sketch after this table):

Parameter               QM9      MD17     ANI-1
initial learning rate   4e-4     1e-3     7e-4
lr patience (epochs)    15       30       5
lr decay factor         0.8      0.8      0.5
lr warmup steps         10,000   1,000    10,000
batch size              128      8        2048
no. layers              8        6        6
no. RBFs                64       32       32
feature dimension       256      128      128
no. parameters          6.87M    1.34M    1.34M
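The QM9 sizes quoted under Dataset Splits pin down the arithmetic: 110,000 + 10,000 + 10,831 = 130,831 usable molecules. A minimal split sketch in Python, assuming a seeded uniform random permutation (the paper states only the sizes, not the seed or shuffling scheme):

import numpy as np

# Reproduce the quoted QM9 split sizes: 110,000 train / 10,000 val /
# 10,831 test out of 130,831 usable molecules. The fixed seed and the
# uniform random permutation are illustrative assumptions.
def make_qm9_split(n_samples, n_train=110_000, n_val=10_000, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train_idx, val_idx, test_idx = make_qm9_split(130_831)
print(len(train_idx), len(val_idx), len(test_idx))  # 110000 10000 10831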
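The hardware row quotes two-GPU DDP training, and the paper builds on pytorch-lightning. A minimal sketch of an equivalent Trainer configuration, assuming the PyTorch Lightning >= 1.7 argument names; the actual training entry point in torchmd-net may differ:

import pytorch_lightning as pl

# Two-GPU DistributedDataParallel training, matching the quoted setup.
# `model` and `datamodule` stand in for the TorchMD-NET objects.
trainer = pl.Trainer(
    accelerator="gpu",   # NVIDIA RTX 2080 Ti GPUs in the paper
    devices=2,           # two GPUs on a single node
    strategy="ddp",      # the DDP protocol quoted above
)
# trainer.fit(model, datamodule=datamodule)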
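Because no dependency versions are given, a reproducer may want to record the environment before training. A small sketch that only reads the version strings of the libraries named in the Software Dependencies row:

# Record the library versions in use, since the paper does not pin any.
import torch
import torch_geometric
import pytorch_lightning

for name, module in [("torch", torch),
                     ("torch_geometric", torch_geometric),
                     ("pytorch_lightning", pytorch_lightning)]:
    print(f"{name}=={module.__version__}")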
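The learning-rate entries in Table 4 (initial rate, plateau patience, decay factor, warmup steps) describe a warmup-plus-plateau-decay schedule. A sketch for the QM9 column, assuming Adam and a linear warmup from zero; Table 4 lists only the numbers, not the optimizer or warmup shape:

import torch

# QM9 column of Table 4: initial lr 4e-4, plateau patience 15 epochs,
# decay factor 0.8, 10,000 warmup steps. Adam and the linear ramp from
# zero are assumptions; the table lists only the numbers.
model = torch.nn.Linear(256, 1)          # stand-in for the 256-dim model
optimizer = torch.optim.Adam(model.parameters(), lr=4e-4)
plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, factor=0.8, patience=15)  # step once per epoch on val loss

WARMUP_STEPS = 10_000

def warmup_lr(step, base_lr=4e-4):
    # Linear ramp over the first WARMUP_STEPS optimizer steps.
    return base_lr * min(1.0, step / WARMUP_STEPS)

# Typical usage: set the warmup lr per batch, then decay per epoch.
# for step, batch in enumerate(loader):
#     for group in optimizer.param_groups:
#         group["lr"] = warmup_lr(step)
#     ...forward, backward, optimizer.step()...
# plateau.step(val_loss)  # once per epoch, on the validation loss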