Quantum 3D Graph Learning with Applications to Molecule Embedding
Authors: Ge Yan, Huaijin Wu, Junchi Yan
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two downstream tasks, molecular property prediction and 3D molecular geometry generation, demonstrate the effectiveness of our model. We conducted numerical experiments on the QM9 dataset for both the molecular property prediction and molecular geometry generation tasks. |
| Researcher Affiliation | Academia | Department of Computer Science and Engineering, and MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University. |
| Pseudocode | No | The paper includes figures showing a scheme and a circuit diagram (Figure 1 and Figure 2), but no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states, 'The source code is written by PyTorch, and we use TorchQuantum (Wang et al., 2022a) as the quantum simulator.' This refers to a third-party tool, not the authors' own implementation code being open-sourced or linked. |
| Open Datasets | Yes | The benchmark is QM9 (Ramakrishnan et al., 2014), which is widely used for predicting various properties of molecules and for 3D molecule generation tasks. |
| Dataset Splits | Yes | The dataset is split into training/validation/test sets. The training set contains 110,000, the validation set contains 10,000, and the test set contains 10,831 molecules. |
| Hardware Specification | Yes | All the experiments are performed on a single machine with 1TB memory, one physical CPU with 28 cores (Intel(R) Xeon(R) W-3175X CPU @ 3.10GHz), and two GPUs (Nvidia Quadro RTX 8000). |
| Software Dependencies | No | The paper states 'The source code is written by PyTorch, and we use TorchQuantum (Wang et al., 2022a) as the quantum simulator', but it does not specify version numbers for PyTorch or TorchQuantum. |
| Experiment Setup | Yes | We use stochastic gradient descent (SGD) with the Adam optimizer (Kingma & Ba, 2014) to train our model for a maximum of 100 epochs with a batch size of 32 and a learning rate of 0.01. ... We use the Adam optimizer to train our model for 100 epochs, with a batch size of 64 and a learning rate of 0.001. (A minimal configuration sketch follows the table.) |
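Since the authors' implementation is not released, the snippet below is only a minimal sketch of the quoted training configuration for the property-prediction setting. The `model`, dataset, and loss are placeholders; the only details taken from the paper are the Adam optimizer, the 100-epoch budget, and the batch size / learning rate pairs.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Reported hyperparameters:
#   property prediction:  batch size 32, learning rate 0.01, up to 100 epochs
#   geometry generation:  batch size 64, learning rate 0.001, 100 epochs
BATCH_SIZE, LR, MAX_EPOCHS = 32, 0.01, 100

model = torch.nn.Linear(16, 1)                      # placeholder for the quantum 3D graph model
train_set = TensorDataset(torch.randn(128, 16),     # placeholder for the QM9 training split
                          torch.randn(128, 1))      # (110,000 molecules in the paper)
loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=LR)  # Adam (Kingma & Ba, 2014)
loss_fn = torch.nn.L1Loss()                         # placeholder loss; the paper's objective is not quoted here

for epoch in range(MAX_EPOCHS):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

Swapping in `BATCH_SIZE, LR = 64, 0.001` corresponds to the quoted geometry-generation configuration.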