Deep Wasserstein Graph Discriminant Learning for Graph Classification
Authors: Tong Zhang, Yun Wang, Zhen Cui, Chuanwei Zhou, Baoliang Cui, Haikuan Huang, Jian Yang
AAAI 2021, pp. 10914-10922
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate our WGDL method, comprehensive experiments are conducted on six graph classification datasets. Experimental results demonstrate the effectiveness of our WGDL and its state-of-the-art performance. |
| Researcher Affiliation | Collaboration | ¹Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; ²Alibaba Group, Hangzhou, China. {tong.zhang, yun.wang, zhen.cui, cwzhou}@njust.edu.cn, moqing.cbl@taobao.com, haikuan.hhk@alibaba-inc.com, csjyang@njust.edu.cn |
| Pseudocode | No | The paper describes the methodology in text and equations but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | The bioinformatics datasets include MUTAG (Debnath et al. 1991), PTC (Toivonen et al. 2003), PROTEINS (Borgwardt et al. 2005), and NCI1 (Wale, Watson, and Karypis 2008), while the social network datasets contain IMDB-BINARY and IMDB-MULTI. *(A hedged dataset-loading sketch follows this table.)* |
| Dataset Splits | Yes | Strictly following the protocol employed in the previous literature, we evaluate performance by 10-fold cross-validation on all employed datasets. Each dataset is divided into 10 sessions, of which nine are used for training and the remaining one for testing. The experiments are repeated 10 times, enumerating the session used for testing. *(See the cross-validation sketch after this table.)* |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU/CPU models, memory). |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python version, library versions). |
| Experiment Setup | Yes | For each input graph, each node is initialized from its node label as a one-hot vector. Both the graph encoder and the momentum graph encoder stack 3 layers of GCNs, whose output dimensions are 512, 256, and 32, respectively. ...the key number for each class is set to be the same (10 per class here) for all datasets. ...We set 2 fully connected layers to further learn the query-resulted vectors, with output dimensions of 64 and the class number, respectively. ...the momentum coefficient m of 0.999... the parameter λ for calculating the OT matrix in Eqn. 3 and Eqn. 11, together with the trade-off parameter β in Eqn. 10, are set to 1. We train our model with the Adam optimizer for 300 epochs, where the weight decay is 10^-4 and the learning rate is 0.001. *(A configuration sketch of these settings follows this table.)* |
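
All six benchmarks belong to the TU graph-classification collection. Below is a minimal loading sketch, assuming PyTorch Geometric's `TUDataset` wrapper and the commonly used `PTC_MR` variant of PTC; the paper itself does not name its data-loading stack.

```python
# Minimal sketch: fetching the six benchmarks named in the paper via
# PyTorch Geometric's TUDataset wrapper. The loader and the PTC_MR
# variant are assumptions; the paper does not state either.
from torch_geometric.datasets import TUDataset

DATASETS = ["MUTAG", "PTC_MR", "PROTEINS", "NCI1", "IMDB-BINARY", "IMDB-MULTI"]

for name in DATASETS:
    ds = TUDataset(root="data", name=name)
    print(name, len(ds), ds.num_classes, ds.num_features)
```

Note that the IMDB datasets ship without node labels in this collection, so the one-hot label initialization quoted under Experiment Setup would need some substitute there (e.g., node degrees); the paper does not specify which.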
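
The 10-fold protocol quoted under Dataset Splits can be sketched as follows. `StratifiedKFold`, the fixed seed, and the `train_and_eval` hook are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the quoted 10-fold protocol: each dataset is split into
# 10 sessions; nine train, one tests; the test session is enumerated.
# Stratification and the fixed seed are assumptions, not paper details.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def ten_fold_accuracy(labels, train_and_eval):
    """train_and_eval(train_idx, test_idx) -> test accuracy (hypothetical hook)."""
    labels = np.asarray(labels)
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    accs = [train_and_eval(tr, te)
            for tr, te in skf.split(np.zeros(len(labels)), labels)]
    return float(np.mean(accs)), float(np.std(accs))
```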
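
The hyper-parameters quoted under Experiment Setup can be collected into one configuration sketch. Everything structural below (the module names, the plain `GCNConv` stack, and the assumed 32-dimensional input to the FC head) is an assumed reading of the description, not the authors' released code.

```python
# Hedged sketch of the reported setup: a 3-layer GCN encoder
# (in_dim -> 512 -> 256 -> 32), a momentum copy updated with m = 0.999,
# an FC head (64 -> num_classes), and Adam (lr 1e-3, weight decay 1e-4,
# 300 epochs). Module and function names are assumptions.
import copy
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class GraphEncoder(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.convs = nn.ModuleList(
            [GCNConv(in_dim, 512), GCNConv(512, 256), GCNConv(256, 32)])

    def forward(self, x, edge_index):
        for conv in self.convs:
            x = torch.relu(conv(x, edge_index))
        return x

def make_model(in_dim, num_classes, head_in_dim=32):
    # head_in_dim is a placeholder: the dimension of the paper's
    # "query-resulted vectors" is not stated in the quote above.
    encoder = GraphEncoder(in_dim)
    momentum_encoder = copy.deepcopy(encoder)  # updated by EMA, never by Adam
    for p in momentum_encoder.parameters():
        p.requires_grad_(False)
    head = nn.Sequential(
        nn.Linear(head_in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))
    return encoder, momentum_encoder, head

@torch.no_grad()
def momentum_update(encoder, momentum_encoder, m=0.999):
    # Exponential moving average of the online encoder's weights.
    for p, mp in zip(encoder.parameters(), momentum_encoder.parameters()):
        mp.mul_(m).add_(p, alpha=1.0 - m)

# Optimizer settings quoted from the paper (300 training epochs):
# optimizer = torch.optim.Adam(
#     list(encoder.parameters()) + list(head.parameters()),
#     lr=1e-3, weight_decay=1e-4)
```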