Class-Attentive Diffusion Network for Semi-Supervised Classification

Authors: Jongin Lim, Daeho Um, Hyung Jin Chang, Dae Ung Jo, Jin Young Choi

AAAI 2021, pp. 8601-8609

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on seven benchmark datasets consistently demonstrate the efficacy of the proposed method and our CAD-Net significantly outperforms the state-of-the-art methods.
Researcher Affiliation | Academia | Jongin Lim (1), Daeho Um (1), Hyung Jin Chang (2), Dae Ung Jo (1), Jin Young Choi (1); (1) Department of ECE, ASRI, Seoul National University; (2) School of Computer Science, University of Birmingham
Pseudocode | No | No pseudocode or algorithm blocks were found.
Open Source Code | Yes | Code is available at https://github.com/ljin0429/CAD-Net.
Open Datasets | Yes | We conducted experiments on 7 benchmark datasets from 3 different graph domains: Citation Networks (CiteSeer, Cora, and PubMed), Recommendation Networks (Amazon Computers and Amazon Photo), and Co-authorship Networks (Coauthor CS and Coauthor Physics).
Dataset Splits | Yes | For citation networks, we followed the standard benchmark setting suggested in (Yang, Cohen, and Salakhutdinov 2016). We evaluated on the same train/validation/test split, which uses 20 nodes per class for training, 500 nodes for validation, and 1000 nodes for testing. For recommendation and co-authorship networks, we closely followed the experimental setup in (Chen et al. 2019). We used 20 nodes per class for training, 30 nodes per class for validation, and the remaining nodes for testing. We randomly split the nodes and report the average accuracy (%) with the standard deviation evaluated over 100 random splits. (A dataset-loading and split-construction sketch follows this table.)
Hardware Specification | Yes | To further validate the computational efficiency of CAD-Net, we compared the average training time per epoch (ms) measured on a single Nvidia GTX 1080 Ti machine.
Software Dependencies | No | The paper mentions 'implemented in PyTorch' and 'Adam optimizer' but does not specify version numbers.
Experiment Setup | Yes | Our models are implemented in PyTorch and trained on a single GPU. For all datasets, we used the Adam optimizer (Kingma and Ba 2014) with an initial learning rate of 0.01, a decay rate of 0.5 per 100 epochs, a weight decay of 5e-4, and 1000 training epochs. We used 20 as the number of diffusion steps K. For citation networks (CiteSeer, Cora, and PubMed), the dimension of the hidden layer for the feature embedding network fθ is 16; for recommendation and co-authorship networks (Amazon Computers, Amazon Photo, Coauthor CS, and Coauthor Physics), it is 64. The sensitivity β is set to 0.7 for all datasets. We used early stopping with patience 200 on the validation set. (A training-configuration sketch follows this table.)
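The datasets and split protocol summarized above can be reproduced with standard graph-learning tooling. The sketch below is a minimal example assuming PyTorch Geometric as the data backend (the paper does not name one); it illustrates the random 20-per-class train / 30-per-class validation / rest test split described for the recommendation and co-authorship networks, while the citation networks use the fixed split of Yang, Cohen, and Salakhutdinov (2016). Function and variable names are illustrative, not taken from the authors' code.

```python
# Minimal sketch, assuming PyTorch Geometric as the data backend (the paper
# does not name a library). Names below are illustrative, not the authors'.
import torch
from torch_geometric.datasets import Planetoid, Amazon, Coauthor

# Citation networks ship with the standard Planetoid split
# (20 nodes per class for training, 500 for validation, 1000 for testing).
cora = Planetoid(root='data/Planetoid', name='Cora')[0]

def random_split(data, num_classes, train_per_class=20, val_per_class=30, seed=0):
    """20 labeled nodes per class for training, 30 per class for validation,
    and all remaining nodes for testing (the protocol quoted above)."""
    g = torch.Generator().manual_seed(seed)
    train_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
    val_mask = torch.zeros(data.num_nodes, dtype=torch.bool)
    for c in range(num_classes):
        idx = (data.y == c).nonzero(as_tuple=False).view(-1)
        idx = idx[torch.randperm(idx.numel(), generator=g)]
        train_mask[idx[:train_per_class]] = True
        val_mask[idx[train_per_class:train_per_class + val_per_class]] = True
    test_mask = ~(train_mask | val_mask)  # all remaining nodes
    return train_mask, val_mask, test_mask

# Amazon Photo shown here; Amazon Computers and the co-authorship graphs
# (Coauthor(root=..., name='CS' / 'Physics')) follow the same protocol.
photo = Amazon(root='data/Amazon', name='Photo')
data = photo[0]
train_mask, val_mask, test_mask = random_split(data, photo.num_classes)
```

Averaging accuracy with its standard deviation over 100 such random splits corresponds to repeating the split with 100 different seeds.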
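The optimization settings in the setup row translate directly into a PyTorch training loop. The sketch below is a hedged reconstruction: StepLR is used as one plausible reading of "a decay rate of 0.5 per 100 epochs", and a plain two-layer MLP on random tensors stands in for the actual CAD-Net model and benchmark data so the snippet runs on its own; see https://github.com/ljin0429/CAD-Net for the authors' implementation.

```python
# Sketch of the reported training configuration. The two-layer MLP and the
# random tensors are placeholders so the snippet is self-contained; they are
# NOT the CAD-Net architecture or the benchmark data. The diffusion steps
# K = 20 and sensitivity beta = 0.7 are CAD-Net model hyperparameters and are
# not represented by this stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_nodes, num_features, num_classes, hidden_dim = 2708, 1433, 7, 16  # Cora-sized
x = torch.randn(num_nodes, num_features)       # placeholder node features
y = torch.randint(num_classes, (num_nodes,))   # placeholder labels
train_mask = torch.zeros(num_nodes, dtype=torch.bool)
val_mask = torch.zeros(num_nodes, dtype=torch.bool)
train_mask[:140] = True                        # placeholder (paper: 20 nodes/class)
val_mask[140:640] = True                       # placeholder (paper: 500 val nodes)

model = nn.Sequential(nn.Linear(num_features, hidden_dim), nn.ReLU(),
                      nn.Linear(hidden_dim, num_classes))

# Reported: Adam, lr 0.01, weight decay 5e-4, lr decayed by 0.5 every 100 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)

best_val_acc, patience, bad_epochs = 0.0, 200, 0
for epoch in range(1000):                      # reported maximum of 1000 epochs
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x)[train_mask], y[train_mask])
    loss.backward()
    optimizer.step()
    scheduler.step()

    # Early stopping with patience 200 on the validation set.
    model.eval()
    with torch.no_grad():
        val_acc = (model(x)[val_mask].argmax(dim=1) == y[val_mask]).float().mean().item()
    if val_acc > best_val_acc:
        best_val_acc, bad_epochs = val_acc, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break
```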