Graph Inference Learning for Semi-supervised Classification

Authors: Chunyan Xu, Zhen Cui, Xiaobin Hong, Tong Zhang, Jian Yang, Wei Liu

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Comprehensive evaluations on four benchmark datasets (including Cora, Citeseer, Pubmed, and NELL) demonstrate the superiority of our proposed GIL when compared against state-of-the-art methods on the semi-supervised node classification task."
Researcher Affiliation | Collaboration | Chunyan Xu, Zhen Cui, Xiaobin Hong, Tong Zhang, and Jian Yang (School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; {cyx,zhen.cui,xbhong,tong.zhang,csjyang}@njust.edu.cn) and Wei Liu (Tencent AI Lab, China; wl2223@columbia.edu).
Pseudocode | No | The paper describes its methods in text and mathematical equations but includes no structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides no concrete access information (e.g., a repository link or an explicit statement of code release) for the described methodology.
Open Datasets | Yes | "We evaluate our proposed GIL method on three citation network datasets: Cora, Citeseer, Pubmed (Sen et al., 2008), and one knowledge graph NELL dataset (Carlson et al., 2010)." (A loading sketch follows this table.)
Dataset Splits | Yes | "Following the previous protocol in (Kipf & Welling, 2017; Zhuang & Ma, 2018), we split the graph data into a training set, a validation set, and a testing set."
Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models or memory) used to run its experiments.
Software Dependencies | No | The paper mentions general techniques and activation functions (e.g., the ReLU unit and stochastic gradient descent) but names no software dependencies with version numbers (e.g., PyTorch or TensorFlow releases).
Experiment Setup | Yes | "The GIL model consists of two graph convolution layers, each of which is followed by a mean-pooling layer, a class-to-node relationship regression module, and a final softmax layer. [...] The channels of the 1-st and 2-nd convolutional layers are set to 128 and 256, respectively. The scale of the receptive field is 2 in both convolutional layers. The dropout rate is set to 0.5 in the convolution and fully connected layers to avoid over-fitting, and the ReLU unit is leveraged as a nonlinear activation function. We pre-train our proposed GIL model for 200 iterations with the training set, where its initial learning rate, decay factor, and momentum are set to 0.05, 0.95, and 0.9, respectively. Here we train the GIL model using the stochastic gradient descent method with the batch size of 100. We further improve the inference learning capability of the GIL model for 1200 iterations with the validation set, where the meta-learning rates α and β are both set to 0.001." (A configuration sketch follows this table.)
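
All four benchmarks cited in the Open Datasets row are distributed through standard graph-learning libraries. Below is a minimal loading sketch, assuming PyTorch Geometric (the paper does not name its tooling); its Planetoid loader reproduces the Kipf & Welling (2017) split protocol that the Dataset Splits row refers to.

```python
# Loading the four benchmarks named in the paper.
# Assumes PyTorch Geometric; the paper itself names no library.
from torch_geometric.datasets import Planetoid, NELL

# Cora / CiteSeer / PubMed ship with the standard "Planetoid" split
# used by Kipf & Welling (2017): a small labeled training set,
# 500 validation nodes, and 1000 test nodes.
cora = Planetoid(root='data/Planetoid', name='Cora')[0]
citeseer = Planetoid(root='data/Planetoid', name='CiteSeer')[0]
pubmed = Planetoid(root='data/Planetoid', name='PubMed')[0]

# NELL knowledge-graph dataset (Carlson et al., 2010).
nell = NELL(root='data/NELL')[0]

# Boolean masks encode the train/validation/test split.
print(cora.train_mask.sum().item(),  # 140 labeled nodes (20 per class)
      cora.val_mask.sum().item(),    # 500 validation nodes
      cora.test_mask.sum().item())   # 1000 test nodes
```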
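The Experiment Setup row pins down most training hyperparameters. The following is a configuration sketch in PyTorch (an assumption; the paper names no framework), with the GIL-specific modules replaced by hypothetical stand-ins, since the paper's graph convolution and class-to-node regression layers are not reproduced here.

```python
import torch
import torch.nn as nn

class TwoLayerNetSketch(nn.Module):
    """Stand-in with the stated widths: 128- and 256-channel layers.
    Linear layers substitute for the paper's graph convolutions and
    its class-to-node relationship regression module."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.conv1 = nn.Linear(in_dim, 128)   # 1st conv layer: 128 channels
        self.conv2 = nn.Linear(128, 256)      # 2nd conv layer: 256 channels
        self.drop = nn.Dropout(p=0.5)         # dropout rate from the paper
        self.out = nn.Linear(256, num_classes)

    def forward(self, x):
        x = self.drop(torch.relu(self.conv1(x)))  # ReLU activation
        x = self.drop(torch.relu(self.conv2(x)))
        return self.out(x)

model = TwoLayerNetSketch(in_dim=1433, num_classes=7)  # Cora dimensions

# Pre-training: SGD with lr 0.05 and momentum 0.9, run for
# 200 iterations with batch size 100 (per the paper).
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
# "Decay factor 0.95" is read here as a multiplicative learning-rate
# decay (an assumption; the paper does not define its schedule).
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

# Inference-learning stage: 1200 further iterations on the validation
# set with meta-learning rates alpha = beta = 0.001 (per the paper).
alpha = beta = 1e-3
```

The sketch only records the meta-learning rates; the update rules of the inference-learning stage are the paper's contribution and are not reconstructed here.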