GeoTMI: Predicting Quantum Chemical Property with Easy-to-Obtain Geometry via Positional Denoising
Authors: Hyeonsu Kim, Jeheon Woo, Seonghwan Kim, Seokhyun Moon, Jun Hyeong Kim, Woo Youn Kim
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our results showed consistent improvements in accuracy across various tasks, demonstrating the effectiveness and robustness of GeoTMI. and, from Section 4 (Experiments): We have tried to demonstrate the effectiveness of GeoTMI in providing a new solution to the infeasibility of high-level 3D geometry, rather than focusing on the performance of the state-of-the-art GNN architecture itself. |
| Researcher Affiliation | Academia | All six authors (Hyeonsu Kim, Jeheon Woo, Seonghwan Kim, Seokhyun Moon, Jun Hyeong Kim, Woo Youn Kim) list the same affiliation: Department of Chemistry, KAIST, Daejeon, South Korea. |
| Pseudocode | No | The paper includes a detailed illustration of its encoder architecture (Figure 1C) and describes its methods textually and with equations, but it does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available on GitHub. |
| Open Datasets | Yes | The QM9 [30] is a widely used benchmark dataset for molecular property prediction... and We used two datasets, released by Grambow et al. [31], for comparison with the previous work. and The OC20 dataset contains data... |
| Dataset Splits | Yes | For all tested models, we used 100,000, 18,000, and 13,000 molecular data for training, validation, and testing, respectively, as in previous work by Satorras et al. [15]. and The used data split and augmentation were the same as in the previous work by Spiekermann et al. [55]. |
| Hardware Specification | Yes | All experiments were conducted using RTX 2080 Ti GPU with 12 GB of memory, RTX 3080 Ti GPU with 12 GB of memory, or RTX A4000 GPU with 16 GB of memory. GNN models were trained on a single GPU, except for those in the IS2RE task of OC20, where we used eight RTX A4000 GPUs. |
| Software Dependencies | No | The paper mentions various GNN models and datasets used (e.g., EGNN, SchNet, DimeNet++, QM9, OC20) but does not provide specific version numbers for software dependencies like programming languages, libraries, or frameworks (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | The detailed hyperparameters of each model are introduced in Appendix C.1. and The hyperparameters used are described in Appendix C.1. and We note that the hyperparameters used are the same as in the previous work, except for the number of transformer blocks to train each model, due to the limitation of our computational resources. |
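The Dataset Splits row quotes exact partition sizes for QM9 (100,000 / 18,000 / 13,000 for train / validation / test, following Satorras et al.). As a minimal sketch of how such a fixed-size split can be reproduced, the snippet below shuffles dataset indices and carves out those three partitions. The shuffling scheme and seed here are assumptions for illustration; the paper follows the split of the cited prior work rather than specifying one.

```python
import random

def split_indices(n_total, n_train=100_000, n_val=18_000, n_test=13_000, seed=0):
    """Shuffle dataset indices and carve out disjoint train/val/test partitions.

    Sizes default to the QM9 split reported in the paper; the seed and
    shuffle are illustrative assumptions, not the authors' procedure.
    """
    assert n_train + n_val + n_test <= n_total
    idx = list(range(n_total))
    random.Random(seed).shuffle(idx)
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

# QM9 contains roughly 134k molecules.
train, val, test = split_indices(133_885)
print(len(train), len(val), len(test))  # → 100000 18000 13000
```

Fixing the seed (or, better, reusing the exact index lists from the prior work) is what makes such a split reproducible across papers.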
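The paper's title describes its core idea: predicting quantum chemical properties from easy-to-obtain (low-level) geometry while using positional denoising toward the high-level geometry as an auxiliary training signal. A heavily hedged sketch of such a combined objective is below; the MSE forms, the weighting `lam`, and the function signature are illustrative assumptions, not GeoTMI's actual loss, which is defined in the paper.

```python
import numpy as np

def combined_loss(pred_property, true_property, denoised_pos, high_level_pos, lam=0.1):
    """Property-prediction loss plus a weighted positional-denoising term.

    Illustrative sketch: `denoised_pos` stands for the model's recovered
    atomic coordinates, `high_level_pos` for the target high-level geometry.
    Both loss forms and `lam` are assumptions, not the paper's objective.
    """
    prop_loss = np.mean((pred_property - true_property) ** 2)
    # Mean squared per-atom displacement between denoised and target geometry.
    denoise_loss = np.mean(np.sum((denoised_pos - high_level_pos) ** 2, axis=-1))
    return prop_loss + lam * denoise_loss

# Toy usage: property error of 1, geometry already perfectly denoised.
loss = combined_loss(np.array([1.0]), np.array([0.0]),
                     np.zeros((2, 3)), np.zeros((2, 3)))
print(loss)  # → 1.0
```

The design intuition is that the denoising term forces the encoder to internalize the mapping from cheap geometries toward expensive ones, so the property head benefits at inference time even when only the low-level geometry is available.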