TGNN: A Joint Semi-supervised Framework for Graph-level Classification
Authors: Wei Ju, Xiao Luo, Meng Qu, Yifan Wang, Chong Chen, Minghua Deng, Xian-Sheng Hua, Ming Zhang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our TGNN on various public datasets and show that it achieves strong performance. |
| Researcher Affiliation | Collaboration | ¹School of Computer Science, Peking University, China; ²School of Mathematical Sciences, Peking University, China; ³Mila - Québec AI Institute, Université de Montréal, Canada; ⁴DAMO Academy, Alibaba Group, China |
| Pseudocode | Yes | Algorithm 1: TGNN's main learning algorithm |
| Open Source Code | No | The paper does not provide concrete access information (e.g., a URL or an explicit statement) for open-source code implementing the described methodology. |
| Open Datasets | Yes | Benchmark Datasets. We evaluate our proposed TGNN using seven publicly accessible datasets (i.e., PROTEINS, DD, IMDB-B, IMDB-M, REDDIT-B, REDDIT-M-5k and COLLAB [Yanardag and Vishwanathan, 2015]) and two large-scale OGB datasets (i.e., OGB-HIV, OGB-MUV). |
| Dataset Splits | Yes | Following Dual Graph [Luo et al., 2022], we adopt the same data split, in which the ratio of labeled training set, unlabeled training set, validation set and test set is 2:5:1:2. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper mentions that 'All methods are implemented in PyTorch' but does not specify the version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For the proposed TGNN, we empirically set the embedding dimension to 64, the number of epochs to 300, and batch size to 64. We modify GIN [Xu et al., 2019] to parameterize the message passing module f_θ, consisting of three convolution layers and one pooling layer with an attention mechanism. For our graph kernel module g_φ, we empirically set the number of hidden graphs to 16 and their size equal to 5 nodes. The maximum length of random walk P is set to 3. Finally, we use Adam to optimize all the models. |
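
The reported setup (three-layer GIN encoder with attention pooling, 64-dimensional embeddings, a 2:5:1:2 labeled/unlabeled/validation/test split, Adam, batch size 64, 300 epochs) can be made concrete with a minimal sketch. This assumes PyTorch Geometric; the class names, the gate network, the random split, and the learning rate are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GINConv, GlobalAttention


def split_dataset(dataset, seed=0):
    """2:5:1:2 split into labeled / unlabeled / validation / test sets,
    matching the ratio reported in the paper (random assignment assumed)."""
    n = len(dataset)
    perm = torch.randperm(n, generator=torch.Generator().manual_seed(seed))
    a, b, c = int(0.2 * n), int(0.7 * n), int(0.8 * n)
    return (dataset[perm[:a]], dataset[perm[a:b]],
            dataset[perm[b:c]], dataset[perm[c:]])


class GINEncoder(nn.Module):
    """Message-passing module f_theta: three GIN convolution layers
    followed by one attention-based pooling layer (embedding dim 64)."""

    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        dims = [in_dim, hidden_dim, hidden_dim]
        self.convs = nn.ModuleList(
            GINConv(nn.Sequential(
                nn.Linear(d, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim)))
            for d in dims)
        # Attention pooling: a gate network scores each node, and node
        # embeddings are summed with softmax-normalised weights per graph.
        self.pool = GlobalAttention(gate_nn=nn.Linear(hidden_dim, 1))

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = conv(x, edge_index).relu()
        return self.pool(x, batch)  # [num_graphs, hidden_dim]


# Training configuration from the paper; the learning rate is an assumption.
model = GINEncoder(in_dim=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# batch_size = 64, epochs = 300
```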
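
The graph kernel module g_φ (16 hidden graphs of 5 nodes each, random walks up to length P = 3) can likewise be sketched as a random-walk kernel between each input graph and a set of small learnable hidden graphs. The factorized, unlabeled walk-count kernel below is a simplifying assumption, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn


class RandomWalkKernel(nn.Module):
    """Graph kernel module g_phi: compares an input graph against
    learnable 'hidden graphs' via a P-step random walk kernel.
    Simplified unlabeled variant (walk counts on the product graph)."""

    def __init__(self, num_hidden=16, hidden_nodes=5, max_walk=3):
        super().__init__()
        # Learnable soft adjacency for each hidden graph.
        self.adj = nn.Parameter(
            torch.rand(num_hidden, hidden_nodes, hidden_nodes))
        self.max_walk = max_walk

    def forward(self, A):
        # A: dense adjacency of one input graph, shape [n, n].
        W = torch.sigmoid(self.adj)        # soft edge weights in (0, 1)
        W = 0.5 * (W + W.transpose(1, 2))  # symmetrise
        feats = []
        Ap, Wp = A, W
        for _ in range(self.max_walk):
            # For unlabeled graphs the p-step walk kernel factorizes:
            # 1^T (A (x) W)^p 1 = (1^T A^p 1) * (1^T W^p 1)
            feats.append(Ap.sum() * Wp.sum(dim=(1, 2)))  # [num_hidden]
            Ap, Wp = Ap @ A, Wp @ W
        return torch.stack(feats).flatten()  # [max_walk * num_hidden]
```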