Graph Wavelet Neural Network

Authors: Bingbing Xu, Huawei Shen, Qi Cao, Yunqi Qiu, Xueqi Cheng

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The proposed GWNN significantly outperforms previous spectral graph CNNs in the task of graph-based semi-supervised classification on three benchmark datasets: Cora, Citeseer and Pubmed. "Experimental results demonstrate that our method consistently outperforms previous spectral CNNs on three benchmark datasets, i.e., Cora, Citeseer, and Pubmed." (Section 4, Experiments)
Researcher Affiliation | Academia | (1) CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences; (2) School of Computer and Control Engineering, University of Chinese Academy of Sciences, Beijing, China
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | "To evaluate the proposed GWNN, we apply GWNN on semi-supervised node classification, and conduct experiments on three benchmark datasets, namely, Cora, Citeseer and Pubmed (Sen et al., 2008)."
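For readers who want to fetch the same benchmarks, a minimal loading sketch is given below. It assumes the PyTorch Geometric Planetoid loader, which is not mentioned in the paper; it is simply one common way to obtain Cora, Citeseer and Pubmed with their standard public splits.

```python
# Minimal sketch (assumption: PyTorch Geometric is installed); the paper itself
# does not specify how the Cora/Citeseer/Pubmed data were loaded.
from torch_geometric.datasets import Planetoid

for name in ("Cora", "CiteSeer", "PubMed"):
    dataset = Planetoid(root=f"data/{name}", name=name)
    data = dataset[0]  # each benchmark is a single citation graph
    print(name,
          f"nodes={data.num_nodes}",
          f"edges={data.num_edges}",
          f"features={dataset.num_node_features}",
          f"classes={dataset.num_classes}")
```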
Dataset Splits | Yes | "The partition of datasets is the same as GCN (Kipf & Welling, 2017) with an additional validation set of 500 labeled samples to determine hyper-parameters." "Following the experimental setup of GCN (Kipf & Welling, 2017), we fetch 20 labeled nodes per class in each dataset to train the model."
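A small verification sketch, again assuming the PyTorch Geometric Planetoid loader rather than the authors' own pipeline, can confirm that the public split matches this description (20 labeled training nodes per class and 500 validation nodes):

```python
# Sketch under the same assumption as above (PyTorch Geometric's public split).
import torch
from torch_geometric.datasets import Planetoid

data = Planetoid(root="data/Cora", name="Cora")[0]

per_class = torch.bincount(data.y[data.train_mask])
print("training nodes per class:", per_class.tolist())  # expected: 20 for each class
print("validation nodes:", int(data.val_mask.sum()))    # expected: 500
```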
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper mentions optimizers (Adam) and toolboxes (the GSP toolbox) but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | "We train a two-layer graph wavelet neural network with 16 hidden units." "We adopt the Adam optimizer (Kingma & Ba, 2014) for parameter optimization with an initial learning rate lr = 0.01." "For Cora, s = 1.0 and t = 1e-4. For Citeseer, s = 0.7 and t = 1e-5. For Pubmed, s = 0.5 and t = 1e-7." "To avoid overfitting, dropout (Srivastava et al., 2014) is applied."
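Since the paper's code is not linked here, the following is only a minimal sketch of the quoted setup under stated assumptions: heat-kernel wavelets at scale s computed with a dense eigendecomposition (the paper instead computes them more efficiently), entries below the threshold t zeroed out, and a two-layer network with 16 hidden units and dropout. The dropout rate of 0.5, the ReLU activation, and all function and class names are illustrative assumptions, not details reported in the quoted setup.

```python
# Sketch of a two-layer graph wavelet network as described in the quoted setup.
# Assumed layer form: H' = act( psi_s * diag(f) * psi_s^{-1} * H * W ).
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def heat_wavelet_bases(adj: np.ndarray, s: float, t: float):
    """Dense heat-kernel wavelet basis and its inverse, sparsified at threshold t."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt  # normalized Laplacian
    lam, u = np.linalg.eigh(lap)
    psi = u @ np.diag(np.exp(-lam * s)) @ u.T      # wavelet basis
    psi_inv = u @ np.diag(np.exp(lam * s)) @ u.T   # inverse wavelet basis
    psi[np.abs(psi) < t] = 0.0                     # drop small entries for sparsity
    psi_inv[np.abs(psi_inv) < t] = 0.0
    return (torch.tensor(psi, dtype=torch.float32),
            torch.tensor(psi_inv, dtype=torch.float32))


class WaveletConv(nn.Module):
    def __init__(self, in_dim, out_dim, psi, psi_inv):
        super().__init__()
        self.register_buffer("psi", psi)
        self.register_buffer("psi_inv", psi_inv)
        self.lin = nn.Linear(in_dim, out_dim, bias=False)    # feature transform
        self.filt = nn.Parameter(torch.ones(psi.shape[0]))   # diagonal spectral filter

    def forward(self, x):
        x = self.lin(x)
        return self.psi @ (self.filt.unsqueeze(1) * (self.psi_inv @ x))


class GWNN(nn.Module):
    def __init__(self, in_dim, num_classes, psi, psi_inv, hidden=16, dropout=0.5):
        super().__init__()
        self.conv1 = WaveletConv(in_dim, hidden, psi, psi_inv)
        self.conv2 = WaveletConv(hidden, num_classes, psi, psi_inv)
        self.dropout = dropout

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.dropout(x, p=self.dropout, training=self.training)
        return self.conv2(x)  # class logits
```

As a usage sketch, one would build the bases per dataset with the quoted settings (for example heat_wavelet_bases(adj, s=1.0, t=1e-4) for Cora) and optimize with torch.optim.Adam(model.parameters(), lr=0.01), matching the reported learning rate.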