Learning Attributed Graph Representation with Communicative Message Passing Transformer
Authors: Jianwen Chen, Shuangjia Zheng, Ying Song, Jiahua Rao, Yuedong Yang
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrated that the proposed model obtained superior performances (around 4% on average) against state-of-the-art baselines on seven chemical property datasets (graph-level tasks) and two chemical shift datasets (node-level tasks). In this section, we evaluate the proposed model CoMPT on three kinds of tasks. |
| Researcher Affiliation | Collaboration | 1School of Computer Science and Engineering, Sun Yat-sen University 2School of Systems Science and Engineering, Sun Yat-sen University 3Key Laboratory of Machine Intelligence and Advanced Computing, Sun Yat-sen University 4Galixir Technologies Ltd, Beijing |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://github.com/jcchan23/CoMPT |
| Open Datasets | Yes | To enable head-to-head comparisons of CoMPT to existing molecular representation methods, we evaluated our proposed model on nine benchmark datasets across three kinds of tasks from [Wu et al., 2018] and [Jonas and Kuhn, 2019], each kind of which consists of 2 to 4 public benchmark datasets, including BBBP, Tox21, SIDER, and ClinTox for Graph Classification tasks, ESOL, FreeSolv and Lipophilicity for Graph Regression tasks, and chemical shift prediction of hydrogen and carbon for Node Regression tasks. |
| Dataset Splits | Yes | In the graph-level task, following the previous works, we utilized a 5-fold cross-validation and replicate experiments on each task five times. Note that we adopted the scaffold split method recommended by [Yang et al., 2019] to split the datasets into training, validation, and test, with a 0.8/0.1/0.1 ratio. ... In the node-level task, we follow the previous study [Jonas and Kuhn, 2019] by randomly splitting the dataset into 80% as the training set and 20% as the test set, and then use 95% of training data to train the model and the remaining 5% to validate the model for early stopping. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions architectural components like 'Transformer' and 'Gated Recurrent Unit (GRU)' but does not provide specific software dependencies with version numbers (e.g., specific library names like PyTorch or TensorFlow, along with their versions). |
| Experiment Setup | Yes | To improve model performance, we applied the grid search to obtain the best hyper-parameters of the models. In the graph-level task, following the previous works, we utilized a 5-fold cross-validation and replicate experiments on each task five times. Note that we adopted the scaffold split method recommended by [Yang et al., 2019] to split the datasets into training, validation, and test, with a 0.8/0.1/0.1 ratio. ... In the node-level task, we follow the previous study [Jonas and Kuhn, 2019] by randomly splitting the dataset into 80% as the training set and 20% as the test set, and then use 95% of training data to train the model and the remaining 5% to validate the model for early stopping. |
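The two split procedures quoted above are concrete enough to sketch. Below is a minimal, hypothetical Python illustration (not the authors' code): a scaffold-grouped 0.8/0.1/0.1 split in the spirit of [Yang et al., 2019], and the node-level 80/20 split with 5% of the training data held out for early stopping from [Jonas and Kuhn, 2019]. The function names are ours, and the scaffold strings are assumed precomputed; in practice they would come from something like RDKit's Bemis-Murcko scaffold utilities.

```python
import random
from collections import defaultdict

def scaffold_split(mol_ids, scaffolds, frac=(0.8, 0.1, 0.1)):
    """Sketch of a scaffold split: molecules sharing a scaffold are kept
    in the same partition, so test molecules are structurally novel.
    `scaffolds` is a list of precomputed scaffold strings (hypothetical
    input; normally derived with RDKit's MurckoScaffold)."""
    groups = defaultdict(list)
    for mid, scaf in zip(mol_ids, scaffolds):
        groups[scaf].append(mid)
    # Place larger scaffold groups first so big clusters land in training.
    ordered = sorted(groups.values(), key=len, reverse=True)
    n = len(mol_ids)
    targets = [f * n for f in frac]
    train, valid, test = [], [], []
    for group in ordered:
        if len(train) + len(group) <= targets[0]:
            train.extend(group)
        elif len(valid) + len(group) <= targets[1]:
            valid.extend(group)
        else:
            test.extend(group)
    return train, valid, test

def node_level_split(items, seed=0):
    """Sketch of the node-level protocol: random 80/20 train/test split,
    then 5% of the training portion held out to validate early stopping."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_test = int(0.2 * len(shuffled))
    test, train_full = shuffled[:n_test], shuffled[n_test:]
    n_valid = int(0.05 * len(train_full))
    valid, train = train_full[:n_valid], train_full[n_valid:]
    return train, valid, test
```

With 100 molecules spread over 10 equal scaffold groups, `scaffold_split` yields an 80/10/10 partition with no scaffold shared across partitions, which is the property the scaffold split is meant to guarantee.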