Graph Sampling-based Meta-Learning for Molecular Property Prediction
Authors: Xiang Zhuang, Qiang Zhang, Bin Wu, Keyan Ding, Yin Fang, Huajun Chen
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 5 commonly used benchmarks show GS-Meta consistently outperforms state-of-the-art methods by 5.71%-6.93% in ROC-AUC and verify the effectiveness of each proposed module. |
| Researcher Affiliation | Collaboration | 1College of Computer Science and Technology, Zhejiang University 2ZJU-Hangzhou Global Scientific and Technological Innovation Center 3Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies |
| Pseudocode | Yes | Algorithm 1 Training and optimization algorithm. |
| Open Source Code | Yes | Our code is available at https://github.com/HICAI-ZJU/GS-Meta. |
| Open Datasets | Yes | We use five common few-shot molecular property prediction datasets from MoleculeNet [Wu et al., 2018]. |
| Dataset Splits | No | The paper defines the training set Dtrain and testing set Dtest for tasks, and how episodes are constructed with support and query sets (2-way K-shot), but it does not specify a separate, explicit validation dataset split for the overall benchmark datasets. |
| Hardware Specification | No | The paper acknowledges "Hangzhou AI Computing Center for their technical support" but does not provide specific details such as GPU models, CPU types, or other hardware specifications used for experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or specific library versions). |
| Experiment Setup | Yes | Appendix A.2 Training Details: We use Adam optimizer with learning rate 0.001. βinner = 0.01 and βouter = 0.001. The batch size is 32. The number of iterations for the inner loop is 1. The number of iterations for the outer loop is 1. We train for 100 epochs. τ = 0.5. For the graph neural network, we use a 2-layer GNN. The hidden dimension is 128. |
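The hyperparameters reported in the Experiment Setup row can be collected into a single configuration sketch. This is illustrative only: the key names (e.g. `beta_inner`, `gnn_layers`) are our assumptions and are not taken from the released GS-Meta code; only the values come from Appendix A.2 as quoted above.

```python
# Hedged sketch of the reported training setup (Appendix A.2).
# Key names are illustrative assumptions; values are as quoted in the table.
CONFIG = {
    "optimizer": "Adam",
    "lr": 0.001,           # Adam learning rate
    "beta_inner": 0.01,    # inner-loop step size
    "beta_outer": 0.001,   # outer-loop step size
    "batch_size": 32,
    "inner_iters": 1,      # iterations per inner loop
    "outer_iters": 1,      # iterations per outer loop
    "epochs": 100,
    "tau": 0.5,            # temperature
    "gnn_layers": 2,       # 2-layer GNN encoder
    "hidden_dim": 128,
}
```

A config dict like this could be passed to a training script or serialized to JSON/YAML; it also makes the unreported items (hardware, library versions) conspicuous by their absence.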