Graph Policy Network for Transferable Active Learning on Graphs

Authors: Shengding Hu, Zheng Xiong, Meng Qu, Xingdi Yuan, Marc-Alexandre Côté, Zhiyuan Liu, Jian Tang

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experimental results on multiple datasets from different domains prove the effectiveness of the learned policy in promoting active learning performance in both settings of transferring between graphs in the same domain and across different domains." |
| Researcher Affiliation | Collaboration | "1 Tsinghua University, 2 Mila-Québec AI Institute, 3 Microsoft Research, 4 HEC Montréal, Canada, 5 Université de Montréal, 6 CIFAR AI Research Chair" |
| Pseudocode | Yes | "Due to the space limit, we give the detailed pseudo-code for policy training and transfer in Appendix A." |
| Open Source Code | Yes | "Our code is publicly available at https://github.com/ShengdingHu/GraphPolicyNetworkActiveLearning" |
| Open Datasets | Yes | "For transferable active learning on graphs from the same domain, we use a multi-graph dataset collected from Reddit, which consists of 5 graphs. For transferable active learning on graphs from different domains, we adopt 5 widely used benchmark datasets: Cora, Citeseer and Pubmed, Coauthor-Physics and Coauthor-CS [23]." (a loading sketch follows the table) |
| Dataset Splits | Yes | "On each graph, we set the sizes of validation and test sets as 500 and 1000 respectively and use all remaining nodes as the candidate training samples for annotation." (a split-construction sketch follows the table) |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running its experiments. It discusses software and datasets but omits hardware specifications. |
| Software Dependencies | No | The paper mentions using GCN and Adam as the optimizer but does not provide version numbers for these or for other software components such as the programming language or libraries (e.g., Python, PyTorch/TensorFlow versions). |
| Experiment Setup | Yes | "We implement the policy network as a two-layer GCN [15] with a hidden layer size of 8. We use Adam [14] as the optimizer with a learning rate of 0.01. The policy network is trained for a maximum of 2000 episodes with a batch size of 5. ... For the classification network, we implement it as a two-layer GCN with a hidden layer size of 64. We use Adam as the optimizer with a learning rate of 0.03 and a weight decay of 0.0005." (a configuration sketch follows the table) |
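The five cross-domain benchmarks named in the Open Datasets row are standard node-classification graphs. A minimal loading sketch, assuming PyTorch Geometric (the paper does not state its data-loading library, and the Reddit multi-graph dataset is not covered here):

```python
# Sketch: loading the five cross-domain benchmarks with PyTorch Geometric.
# PyG is an assumption; the paper does not name its data-loading library.
from torch_geometric.datasets import Planetoid, Coauthor

cora = Planetoid(root="data/Planetoid", name="Cora")[0]
citeseer = Planetoid(root="data/Planetoid", name="CiteSeer")[0]
pubmed = Planetoid(root="data/Planetoid", name="PubMed")[0]
physics = Coauthor(root="data/Coauthor", name="Physics")[0]
cs = Coauthor(root="data/Coauthor", name="CS")[0]
```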
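The Dataset Splits row fixes the validation and test sets at 500 and 1,000 nodes per graph and treats every remaining node as a candidate for annotation. A minimal sketch of that split, assuming nodes are drawn uniformly at random (the paper does not specify the sampling scheme):

```python
import torch

def make_splits(num_nodes: int, num_val: int = 500, num_test: int = 1000,
                seed: int = 0):
    """Random val/test split; all remaining nodes form the candidate pool."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    val_idx = perm[:num_val]
    test_idx = perm[num_val:num_val + num_test]
    candidate_idx = perm[num_val + num_test:]  # pool for active annotation
    return val_idx, test_idx, candidate_idx
```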
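The Experiment Setup row pins down both networks: a two-layer GCN policy with hidden size 8 (Adam, learning rate 0.01) and a two-layer GCN classifier with hidden size 64 (Adam, learning rate 0.03, weight decay 0.0005). A configuration sketch assuming PyTorch Geometric's GCNConv; the input/output dimensions and the single-score policy head are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class TwoLayerGCN(torch.nn.Module):
    """Two-layer GCN matching the sizes quoted in the Experiment Setup row."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

# Policy network: hidden size 8; in_dim=16 and the one-score-per-node
# head (out_dim=1) are assumptions for illustration only.
policy = TwoLayerGCN(in_dim=16, hidden_dim=8, out_dim=1)
policy_opt = torch.optim.Adam(policy.parameters(), lr=0.01)

# Classification network: hidden size 64, lr 0.03, weight decay 0.0005;
# in_dim=16 and out_dim=7 (number of classes) are placeholder values.
clf = TwoLayerGCN(in_dim=16, hidden_dim=64, out_dim=7)
clf_opt = torch.optim.Adam(clf.parameters(), lr=0.03, weight_decay=0.0005)
```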