Simple Spectral Graph Convolution

Authors: Hao Zhu, Piotr Koniusz

ICLR 2021

Reproducibility Variables (each entry lists the variable, the assessed Result, and the supporting LLM Response):
Research Type
  Result: Experimental
  LLM Response: "Our experimental evaluations show that S2GC with a linear learner is competitive in text and node classification tasks. Moreover, S2GC is comparable to other state-of-the-art methods for node clustering and community prediction tasks. In this section, we evaluate the proposed method on four different tasks: node clustering, community prediction, semi-supervised node classification and text classification."
Researcher Affiliation
  Result: Academia
  LLM Response: "Hao Zhu, Piotr Koniusz; Australian National University, Canberra, Australia; Data61/CSIRO, Canberra, Australia; {hao.zhu,piotr.koniusz}@anu.edu.au"
Pseudocode
  Result: No
  LLM Response: The paper does not contain any pseudocode or clearly labeled algorithm blocks.
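Although the paper provides no pseudocode, its published propagation rule is a simple closed-form filter, X' = (1/K) Σ_{k=1}^{K} ((1-α) T^k X + α X) with T = D̃^{-1/2}(A + I)D̃^{-1/2}. The sketch below is our own minimal PyTorch rendering of that formula, not the authors' implementation; all variable names are ours, and a dense adjacency matrix is assumed for simplicity.

```python
import torch

def s2gc_features(adj: torch.Tensor, x: torch.Tensor,
                  k: int = 16, alpha: float = 0.05) -> torch.Tensor:
    """Sketch of S2GC feature propagation (not the authors' code).

    adj: dense (n, n) adjacency matrix without self-loops
    x:   (n, d) node feature matrix
    Returns (1/K) * sum_{i=1..K} ((1 - alpha) * T^i x + alpha * x),
    where T = D^-1/2 (A + I) D^-1/2.
    """
    n = adj.size(0)
    a_hat = adj + torch.eye(n, dtype=adj.dtype)          # add self-loops
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    t = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

    out = torch.zeros_like(x)
    h = x
    for _ in range(k):
        h = t @ h                                        # T^i x, built iteratively
        out = out + (1.0 - alpha) * h + alpha * x
    return out / k
```

A linear classifier trained on these precomputed features corresponds to the "S2GC with a linear learner" setting described in the Research Type entry above.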
Open Source Code
  Result: Yes
  LLM Response: "The code is available at https://github.com/allenhaozhu/SSGC."
Open Datasets
  Result: Yes
  LLM Response: "We compare S2GC with three variants of clustering... on four datasets: Cora, CiteSeer, PubMed, and Wiki... For the semi-supervised node classification task, we apply the standard fixed training, validation and testing splits (Yang et al., 2016) on the Cora, Citeseer, and Pubmed datasets... We ran our experiments on five widely used benchmark corpora including the Movie Review (MR), 20-Newsgroups (20NG), Ohsumed, R52 and R8 of Reuters 21578."
Dataset Splits
  Result: Yes
  LLM Response: "For the semi-supervised node classification task, we apply the standard fixed training, validation and testing splits (Yang et al., 2016) on the Cora, Citeseer, and Pubmed datasets, with 20 nodes per class for training, 500 nodes for validation and 1,000 nodes for testing. ... 10% of the training set is randomly selected for validation."
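The fixed Yang et al. (2016) splits quoted above are the standard "public" Planetoid splits. As one way to obtain them (PyTorch Geometric is our choice here, not tooling stated in the paper), a minimal sketch:

```python
from torch_geometric.datasets import Planetoid

# Load Cora with the fixed Yang et al. (2016) "public" split:
# 20 labeled nodes per class for training, 500 nodes for validation,
# and 1,000 nodes for testing.
dataset = Planetoid(root="data/Cora", name="Cora", split="public")
data = dataset[0]
print(int(data.train_mask.sum()), int(data.val_mask.sum()), int(data.test_mask.sum()))
# Expected for Cora: 140 500 1000 (7 classes x 20 training nodes)
```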
Hardware Specification
  Result: No
  LLM Response: The paper does not provide specific hardware details such as GPU or CPU models, memory, or cloud computing specifications used for running the experiments.
Software Dependencies
  Result: No
  LLM Response: The paper mentions 'PyTorch' and other software components like the 'Adam SGD optimizer' and the 'MetaOpt package', but does not specify their version numbers for reproducibility.
Experiment Setup
  Result: Yes
  LLM Response: "We use the Adam SGD optimizer (Kingma & Ba, 2014) with a learning rate of 0.02 to train S2GC. We set α = 0.05 and K = 16 on all datasets. ... For TextGCN, SGC, and our approach, the embedding size of the first convolution layer is 200 and the window size is 20. We set the learning rate to 0.02, dropout rate to 0.5 and the decay rate to 0. ... we trained our method and TextGCN for a maximum of 200 epochs using the Adam (Kingma & Ba, 2014) optimizer, and we stop training if the validation loss does not decrease for 10 consecutive epochs."
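The quoted hyperparameters are enough to sketch the node-classification training loop: a linear classifier on precomputed S2GC features, Adam with a learning rate of 0.02, up to 200 epochs, and early stopping after 10 epochs without validation improvement. The following is our own minimal reconstruction, not the authors' code; `feats`, `labels`, and the boolean masks are assumed to come from a loader such as the Planetoid sketch above.

```python
import torch
import torch.nn.functional as F

def train(feats, labels, train_mask, val_mask, num_classes,
          max_epochs=200, patience=10):
    # Linear learner on fixed S2GC features, as reported in the paper.
    model = torch.nn.Linear(feats.size(1), num_classes)
    # Learning rate 0.02 and decay rate 0, per the quoted setup.
    opt = torch.optim.Adam(model.parameters(), lr=0.02, weight_decay=0.0)

    best_val, bad_epochs = float("inf"), 0
    for _ in range(max_epochs):
        model.train()
        opt.zero_grad()
        loss = F.cross_entropy(model(feats[train_mask]), labels[train_mask])
        loss.backward()
        opt.step()

        # Stop if validation loss fails to improve for `patience` epochs.
        model.eval()
        with torch.no_grad():
            val_loss = F.cross_entropy(model(feats[val_mask]),
                                       labels[val_mask]).item()
        if val_loss < best_val:
            best_val, bad_epochs = val_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    return model
```

The dropout rate of 0.5 and embedding size of 200 quoted above apply to the text-classification experiments and are omitted from this node-classification sketch.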