Multi-View Robust Graph Representation Learning for Graph Classification
Authors: Guanghui Ma, Chunming Hu, Ling Ge, Hong Zhang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive experiments and visualizations on eight benchmark datasets demonstrate the effectiveness of MGRL. |
| Researcher Affiliation | Academia | 1School of Computer Science and Engineering, Beihang University, Beijing, China 2College of Software, Beihang University, Beijing, China 3Zhongguancun Laboratory, Beijing, China 4National Computer Network Emergency Response Technical Team / Coordination Center of China |
| Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not explicitly state that the source code for the methodology is available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We evaluate our approach on eight benchmark datasets from TUDataset [Morris et al., 2020], including four social networks datasets, such as IMDB-BINARY (IB), IMDB-MULTI (IM), COLLAB (CO) and REDDIT-BINARY (RB), and four bioinformatics datasets, such as PROTEINS (PR), DD, NCI1 (NC) and Mutagenicity (MUT). |
| Dataset Splits | Yes | We randomly select 80% of the data for the training set, 10% for the validation set, and the remaining 10% for the test set. |
| Hardware Specification | Yes | We use PyTorch to implement our model on a Linux machine with a GPU device Tesla V100 SXM2 32 GB. |
| Software Dependencies | No | The paper mentions using PyTorch and Adam optimizer, but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | During the practical implementation, the batch size is set to 32, the dropout is set to 0.5, the moving average coefficient λ is set to 0.0001, and the α1, α2, α3, α4 are set to 0.001, 1, 1, 1, respectively. We utilize a grid search technique to select the other best hyper-parameters with the learning rate selected from {0.01, 0.05, 0.001, 0.005}, the temperature parameter τ1, τ2 and τ3 selected from {0.07, 0.1, 0.3, 0.5, 0.7, 0.9} and the K selected from {2,4,6,8,10}. |
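The reported 80/10/10 random split and hyper-parameter grid search can be sketched as follows. This is an illustrative reconstruction, not code from the paper: the function names, the seed handling, and the use of plain index lists are assumptions; only the fixed values and grid ranges come from the quoted setup.

```python
import itertools
import random

# Fixed hyper-parameters as reported in the paper.
FIXED = {
    "batch_size": 32,
    "dropout": 0.5,
    "lambda_ema": 1e-4,           # moving-average coefficient λ
    "alphas": (0.001, 1, 1, 1),   # α1, α2, α3, α4
}

# Grid-searched hyper-parameters (value sets taken from the paper).
GRID = {
    "lr": [0.01, 0.05, 0.001, 0.005],
    "tau1": [0.07, 0.1, 0.3, 0.5, 0.7, 0.9],
    "tau2": [0.07, 0.1, 0.3, 0.5, 0.7, 0.9],
    "tau3": [0.07, 0.1, 0.3, 0.5, 0.7, 0.9],
    "K": [2, 4, 6, 8, 10],
}

def grid_configs(grid=GRID, fixed=FIXED):
    """Yield one full config dict per point in the hyper-parameter grid."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        cfg = dict(fixed)
        cfg.update(zip(keys, values))
        yield cfg

def split_indices(n, seed=0, train=0.8, val=0.1):
    """Random 80%/10%/10% train/val/test split over n graph indices."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```

With these value sets the grid contains 4 × 6 × 6 × 6 × 5 = 4320 candidate configurations, which gives a sense of the tuning budget the setup implies.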