Continuous Graph Neural Networks

Authors: Louis-Pascal Xhonneux, Meng Qu, Jian Tang

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the task of node classification demonstrate the effectiveness of our proposed approach over competitive baselines. In this section, we evaluate the performance of our proposed approach on the semi-supervised node classification task.
Researcher Affiliation | Academia | Mila Quebec AI Institute, Montréal, Canada; University of Montréal, Montréal, Canada; HEC Montréal, Montréal, Canada; CIFAR AI Research Chair.
Pseudocode | No | No structured pseudocode or algorithm blocks clearly labeled as 'Pseudocode' or 'Algorithm' were found.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology or a link to a code repository.
Open Datasets | Yes | In our experiment, we use four benchmark datasets for evaluation, including Cora, Citeseer, Pubmed, and NELL. Following existing studies (Yang et al., 2016; Kipf and Welling, 2016; Veličković et al., 2017), we use the standard data splits from (Yang et al., 2016) for Cora, Citeseer and Pubmed, where 20 nodes of each class are used for training and another 500 labeled nodes are used for validation.
Dataset Splits | Yes | Following existing studies (Yang et al., 2016; Kipf and Welling, 2016; Veličković et al., 2017), we use the standard data splits from (Yang et al., 2016) for Cora, Citeseer and Pubmed, where 20 nodes of each class are used for training and another 500 labeled nodes are used for validation.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments.
Experiment Setup | No | The paper mentions using a 'random hyperparameter search' but does not explicitly state the specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or the detailed training configurations used for the final models.
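For context on the Open Datasets and Dataset Splits entries above: the standard splits from Yang et al. (2016) for Cora, Citeseer, and Pubmed are publicly available. The paper does not say which tooling it used, so the sketch below is only an illustration, assuming PyTorch Geometric is installed, of how those public splits can be loaded and checked against the quoted figures (20 labeled nodes per class for training, 500 nodes for validation).

```python
# Hedged sketch (not the authors' code): load the standard Yang et al. (2016)
# "public" splits for Cora/Citeseer/Pubmed via PyTorch Geometric's Planetoid loader.
from torch_geometric.datasets import Planetoid

for name in ["Cora", "Citeseer", "Pubmed"]:
    dataset = Planetoid(root=f"data/{name}", name=name, split="public")
    data = dataset[0]
    # The public split marks 20 labeled nodes per class for training and 500 for validation.
    print(
        name,
        "train:", int(data.train_mask.sum()),
        "val:", int(data.val_mask.sum()),
        "test:", int(data.test_mask.sum()),
    )
```

For Cora this should report 140 training nodes (20 per class over 7 classes), 500 validation nodes, and 1000 test nodes.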
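On the Experiment Setup entry: the paper mentions a random hyperparameter search without reporting the selected values. Purely as an illustration of that procedure, the sketch below samples configurations at random from a hypothetical search space; the parameter names, ranges, trial budget, and the train_and_evaluate stub are assumptions for demonstration, not values from the paper.

```python
# Illustrative random hyperparameter search; the search space and trial budget
# are assumptions, not the paper's reported setup.
import random

SEARCH_SPACE = {
    "learning_rate": [1e-3, 5e-3, 1e-2],
    "weight_decay": [5e-4, 1e-3, 5e-3],
    "hidden_dim": [16, 64, 256],
    "dropout": [0.0, 0.3, 0.5],
}

def train_and_evaluate(config):
    """Placeholder: train a model with `config` and return validation accuracy."""
    return random.random()  # stand-in so the sketch runs end to end

def random_search(space, num_trials=50):
    """Sample `num_trials` random configurations and keep the best on validation."""
    best_config, best_val_acc = None, float("-inf")
    for _ in range(num_trials):
        config = {key: random.choice(values) for key, values in space.items()}
        val_acc = train_and_evaluate(config)
        if val_acc > best_val_acc:
            best_config, best_val_acc = config, val_acc
    return best_config, best_val_acc

if __name__ == "__main__":
    config, acc = random_search(SEARCH_SPACE)
    print("best config:", config, "val acc:", acc)
```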