Contrastive Graph Poisson Networks: Semi-Supervised Learning with Extremely Limited Labels

Authors: Sheng Wan, Yibing Zhan, Liu Liu, Baosheng Yu, Shirui Pan, Chen Gong

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted extensive experiments on different types of datasets to demonstrate the superiority of CGPN. Experimental results on benchmark datasets confirm the strong benefits of our proposed CGPN when dealing with semi-supervised node classification at very low label rates.
Researcher Affiliation | Collaboration | Sheng Wan1,2,3, Yibing Zhan3, Liu Liu4, Baosheng Yu4, Shirui Pan5, Chen Gong1,2,3. Affiliations: 1PCA Lab, Key Lab of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education; 2Jiangsu Key Lab of Image and Video Understanding for Social Security, School of Computer Science and Engineering, Nanjing University of Science and Technology; 3JD Explore Academy; 4The University of Sydney; 5Department of Data Science and AI, Faculty of IT, Monash University
Pseudocode | No | The paper describes mathematical equations and a framework diagram but does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it state that code is released or available.
Open Datasets | Yes | The experiments are conducted on four commonly used benchmark datasets, including three widely-used citation networks (i.e., Cora, CiteSeer, and PubMed) [11, 40] and one Amazon product co-purchase network (i.e., Amazon Photo) [41]. The dataset statistics are summarized in Table 1. (A dataset-loading sketch follows the table.)
Dataset Splits | No | For all the adopted datasets, we randomly choose one, two, three, and four labeled nodes per class for training, respectively, in order to evaluate the model performance under label-scarce settings. The paper describes how labeled nodes are chosen for training but does not provide overall train/validation/test splits as percentages or absolute counts, nor does it explain how validation sets were created or used. (A split-construction sketch follows the table.)
Hardware Specification | Yes | The experiments are conducted on a Linux server equipped with a Tesla P40 GPU.
Software Dependencies | No | The paper mentions implementing models (e.g., GCN, GAT) and using a single-head attention mechanism, but it does not provide version numbers for any software dependencies such as Python, PyTorch, TensorFlow, or supporting libraries.
Experiment Setup | No | The hyperparameters, such as the number of hidden units and the learning rate, are determined via grid search. The paper states that hyperparameters are tuned via grid search but does not report the concrete values selected (e.g., a specific learning rate or batch size) or other training configurations. (A grid-search sketch follows the table.)
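
The paper does not say which toolkit was used to obtain Cora, CiteSeer, PubMed, and Amazon Photo. A minimal loading sketch, assuming the PyTorch Geometric dataset loaders (an assumption, not something stated in the paper):

```python
from torch_geometric.datasets import Planetoid, Amazon

# Citation networks reported in the paper: Cora, CiteSeer, PubMed.
cora = Planetoid(root='data/Planetoid', name='Cora')
citeseer = Planetoid(root='data/Planetoid', name='CiteSeer')
pubmed = Planetoid(root='data/Planetoid', name='PubMed')

# Amazon product co-purchase network: Photo.
photo = Amazon(root='data/Amazon', name='Photo')

# Each dataset holds a single graph with node features x, edges edge_index, and labels y.
print(cora[0])
```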
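
The label-scarce splits are described only as one to four randomly chosen labeled nodes per class. A sketch of how such a split could be reconstructed, assuming a NumPy label array and a hypothetical sample_label_scarce_split helper:

```python
import numpy as np

def sample_label_scarce_split(labels, num_per_class, seed=0):
    """Randomly pick `num_per_class` training nodes for each class.

    `labels` is a 1-D integer array of node labels. All remaining nodes are
    left for evaluation; how the paper divides them into validation and test
    sets is not reported, so that step is omitted here.
    """
    rng = np.random.default_rng(seed)
    train_idx = []
    for c in np.unique(labels):
        class_nodes = np.where(labels == c)[0]
        train_idx.extend(rng.choice(class_nodes, size=num_per_class, replace=False))
    train_idx = np.array(sorted(train_idx))
    rest_idx = np.setdiff1d(np.arange(len(labels)), train_idx)
    return train_idx, rest_idx

# Example: one labeled node per class, the lowest label rate evaluated in the paper.
# train_idx, rest_idx = sample_label_scarce_split(data.y.numpy(), num_per_class=1)
```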
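
The paper states only that the number of hidden units and the learning rate were tuned via grid search, without reporting the grid or the chosen values. A generic grid-search sketch, where the candidate values and the train_and_evaluate callable are hypothetical placeholders:

```python
from itertools import product

def grid_search(train_and_evaluate, hidden_units_grid, learning_rate_grid):
    """Try every (hidden_units, learning_rate) pair and keep the best one.

    `train_and_evaluate` should train a model with the given hyperparameters
    and return a validation accuracy; it stands in for the unreleased CGPN
    training loop.
    """
    best_acc, best_config = float('-inf'), None
    for hidden_units, lr in product(hidden_units_grid, learning_rate_grid):
        acc = train_and_evaluate(hidden_units=hidden_units, learning_rate=lr)
        if acc > best_acc:
            best_acc, best_config = acc, (hidden_units, lr)
    return best_config, best_acc

# Hypothetical candidate values; the paper does not report the actual grid.
# best_config, best_acc = grid_search(train_and_evaluate,
#                                     hidden_units_grid=[16, 32, 64, 128],
#                                     learning_rate_grid=[0.001, 0.005, 0.01, 0.05])
```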