Neural P$^3$M: A Long-Range Interaction Modeling Enhancer for Geometric GNNs
Authors: Yusong Wang, Chaoran Cheng, Shaoning Li, Yuxuan Ren, Bin Shao, Ge Liu, Pheng-Ann Heng, Nanning Zheng
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Neural P$^3$M exhibits flexibility across a wide range of molecular systems and demonstrates remarkable accuracy in predicting energies and forces, outperforming prior methods on benchmarks such as the MD22 dataset. It also achieves an average improvement of 22% on the OE62 dataset while integrating with various architectures. |
| Researcher Affiliation | Collaboration | Yusong Wang (1), Chaoran Cheng (2), Shaoning Li (3), Yuxuan Ren (4), Bin Shao (5), Ge Liu (2), Pheng-Ann Heng (3), Nanning Zheng (1). (1) National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University; (2) University of Illinois Urbana-Champaign; (3) Department of Computer Science and Engineering, The Chinese University of Hong Kong; (4) University of Science and Technology of China; (5) Microsoft Research AI4Science |
| Pseudocode | Yes | Pseudocode for the Neural P$^3$M block is provided in Appendix D.1 to enhance understanding. It is presented in Algorithm 1 as a general framework for iteratively and interdependently updating the atom features h and mesh features m (a hedged sketch of such a block is given after the table). |
| Open Source Code | Yes | Codes are available at https://github.com/OnlyLoveKFC/Neural_P3M. |
| Open Datasets | Yes | All datasets used in this paper are publicly and freely accessible. We have included sufficient instructions to the datasets and our experimental settings in Section 4. |
| Dataset Splits | Yes | For consistency with Allegro, we randomly split them into 950 structures for training, 50 structures for validation, and the remaining structures for testing. The dataset is strictly split into train, validation, and test sets according to Ewald MP [12]. |
| Hardware Specification | Yes | Experiments are conducted on an NVIDIA V100 GPU (16 GB). Experiments are conducted on an NVIDIA A100 GPU (80 GB). |
| Software Dependencies | No | The paper mentions the use of PyTorch for FFT acceleration, but does not provide specific version numbers for all key software components (e.g., Python, other libraries, or specific GNN implementations used as baselines/integrated models) necessary for full reproducibility. |
| Experiment Setup | Yes | The initial learning rate is set to 0.0018 and is preceded by a warm-up phase of 1000 steps. In our loss function, energy and force are weighted at a ratio of 0.1 / 0.9, respectively... The initial learning rate is carefully tuned within the range of 0.001 to 0.0018 to optimize performance. Additionally, the energy and force weights in the loss function are customized for different molecules: supramolecules use a weight of 0.005 for energy and 0.995 for force, while other molecules use a ratio of 0.05 / 0.95 (a minimal sketch of this loss weighting and warm-up follows the table). |
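
The iterative atom/mesh update described in the Pseudocode row can be pictured with the short PyTorch sketch below. This is only a minimal illustration, not the authors' Algorithm 1: the class name `NeuralP3MBlockSketch`, the pooling via `index_add_`, and the fixed Gaussian filter applied in Fourier space are all assumptions standing in for the actual atom-to-mesh, mesh-to-mesh, and mesh-to-atom operations.

```python
# Minimal, hypothetical sketch of a Neural P3M-style block that updates atom
# features h and mesh features m interdependently. All names and the FFT-based
# mixing below are assumptions, not the authors' implementation (Algorithm 1).
import torch
import torch.nn as nn


class NeuralP3MBlockSketch(nn.Module):
    def __init__(self, hidden_dim: int, grid_size: int):
        super().__init__()
        self.grid_size = grid_size
        self.atom_to_mesh = nn.Linear(hidden_dim, hidden_dim)  # atoms -> enclosing cells
        self.mesh_to_atom = nn.Linear(hidden_dim, hidden_dim)  # cells -> contained atoms
        self.mesh_update = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.SiLU())
        self.atom_update = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.SiLU())

    def forward(self, h, m, atom_cell_idx):
        # h: (num_atoms, hidden_dim) atom features
        # m: (grid_size**3, hidden_dim) mesh (grid-cell) features
        # atom_cell_idx: (num_atoms,) index of the cell containing each atom
        G = self.grid_size

        # 1) Atom -> mesh: pool atom features into their enclosing cells.
        pooled = torch.zeros_like(m)
        pooled.index_add_(0, atom_cell_idx, self.atom_to_mesh(h))

        # 2) Mesh -> mesh: global mixing over the grid via FFT, here with a fixed
        #    Gaussian filter as a stand-in for the long-range (Ewald-like) solver.
        grid = (m + pooled).view(G, G, G, -1)
        freq = torch.fft.fftn(grid, dim=(0, 1, 2))
        k = torch.fft.fftfreq(G, device=m.device)
        kx, ky, kz = torch.meshgrid(k, k, k, indexing="ij")
        kernel = torch.exp(-(kx ** 2 + ky ** 2 + kz ** 2)).unsqueeze(-1)
        mixed = torch.fft.ifftn(freq * kernel, dim=(0, 1, 2)).real.reshape(m.shape)

        # 3) Interdependent updates: mesh features absorb the mixed signal,
        #    atom features read back the mesh feature of their own cell.
        m_new = self.mesh_update(torch.cat([m, mixed], dim=-1))
        h_new = self.atom_update(
            torch.cat([h, self.mesh_to_atom(m_new)[atom_cell_idx]], dim=-1))
        return h_new, m_new
```

In this reading, several such blocks would be stacked so that short-range atom messages and long-range mesh information refine each other at every layer; the real block additionally handles geometry (positions, cell vectors) that the sketch omits.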
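The training details quoted in the Experiment Setup row translate into a small amount of standard PyTorch code. The sketch below only reuses the numbers reported above (learning rate 0.0018, 1000 warm-up steps, energy/force weights 0.1 / 0.9); the choice of Adam, L1 losses, and the `LambdaLR` warm-up schedule are assumptions, since the report does not quote them.

```python
# Sketch of the quoted training setup: weighted energy/force loss plus a linear
# warm-up over the first 1000 steps. Optimizer, loss type (L1), and scheduler
# are assumptions; only the numeric values come from the report above.
import torch
import torch.nn.functional as F


def energy_force_loss(pred_e, pred_f, true_e, true_f, w_energy=0.1, w_force=0.9):
    """Weighted energy/force objective, e.g. 0.1 / 0.9 (or 0.005 / 0.995 for supramolecules)."""
    return w_energy * F.l1_loss(pred_e, true_e) + w_force * F.l1_loss(pred_f, true_f)


model = torch.nn.Linear(16, 1)  # stand-in for the actual geometric GNN
optimizer = torch.optim.Adam(model.parameters(), lr=1.8e-3)  # 0.0018 as quoted

warmup_steps = 1000  # warm-up phase of 1000 steps
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps))

# Per training step: compute the loss, backpropagate, then advance the schedule.
# loss = energy_force_loss(pred_e, pred_f, true_e, true_f)
# loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```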