Graph Agreement Models for Semi-Supervised Learning
Authors: Otilia Stretcu, Krishnamurthy Viswanathan, Dana Movshovitz-Attias, Emmanouil Platanios, Sujith Ravi, Andrew Tomkins
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We performed a set of experiments to test different properties of GAM. First, we tested the generality of GAM by applying our approach to Multilayer Perceptrons (MLP), Convolutional Neural Networks (CNN), Graph Convolutional Networks (GCN) [15], and Graph Attention Networks (GAT) [35]. Next, we tested the robustness of GAM when faced with noisy graphs, as well as evaluated GAM and GAM* with and without a provided graph, comparing them with the state-of-the-art methods. |
| Researcher Affiliation | Collaboration | Google Research, Carnegie Mellon University ostretcu@cs.cmu.edu,{kvis,danama}@google.com, e.a.platanios@cs.cmu.edu,{tomkins,sravi}@google.com |
| Pseudocode | No | The paper includes diagrams illustrating the learning paradigm and co-training algorithm, and describes the steps of the learning algorithm in prose, but does not present a formal pseudocode block or algorithm listing. |
| Open Source Code | Yes | Our experiments were performed using a single Nvidia Titan X GPU, and our implementation can be found at https://github.com/tensorflow/neural-structured-learning. |
| Open Datasets | Yes | We obtained three public datasets from Yang et al. [38]: Cora [19], Citeseer [5], and Pubmed [25], which have become the de facto standard for evaluating graph node classification algorithms. ... we tested GAM on the popular CIFAR-10 [16] and SVHN [26] datasets. |
| Dataset Splits | Yes | We used the same train/validation/test splits as Yang et al. [39], which have been used by the methods we compare to. ... For evaluation, we use the setup and train/validation/test splits provided by [27] |
| Hardware Specification | Yes | Our experiments were performed using a single Nvidia Titan X GPU, and our implementation can be found at https://github.com/tensorflow/neural-structured-learning. |
| Software Dependencies | No | We implemented our models in TensorFlow [1]. Parameter updates are performed using the Adam optimizer [14] with default TensorFlow parameters... |
| Experiment Setup | Yes | Parameter updates are performed using the Adam optimizer [14] with default TensorFlow parameters, and an initial learning rate of 0.001 for MLPs and GCN, and 0.005 for GAT (based on the original publication [35]). When training the classification model, we used a batch size of 128 for both the supervised term and for the edges in each of the LL, LU, and UU terms. We stopped training when validation accuracy did not increase in the last 2000 iterations, and reported the test accuracy at the iteration with the best validation performance. For the agreement model, we sampled random batches containing pairs of nodes... In both cases, we ensured a ratio of 50% positives (labels agree) and 50% negatives (labels disagree). ... We started with 20 labeled examples per class and, when extending the labeled node set, we added the M most confident predictions of the classifier over unlabeled nodes. In our experiments, we set M = 200, but doing parameter selection for M as well could potentially lead to even better results. To avoid adding incorrectly-labeled nodes, we filtered out predictions where the classification confidence (i.e., the maximum probability assigned to one of the labels) was lower than 0.4 (since the smallest number of classes considered is 3 for Pubmed, making chance classification probability 0.33). |
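The labeled-set extension step quoted above (keep only predictions with confidence at least 0.4, then add the M = 200 most confident ones) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name `extend_labeled_set` and its signature are assumptions, and only the threshold and top-M logic come from the paper's description.

```python
import numpy as np

def extend_labeled_set(probs, unlabeled_idx, m=200, threshold=0.4):
    """Select up to m most confident unlabeled predictions above threshold.

    probs:         (num_unlabeled, num_classes) softmax outputs of the classifier.
    unlabeled_idx: (num_unlabeled,) node indices the rows of probs refer to.
    Returns (selected_node_indices, pseudo_labels).
    """
    conf = probs.max(axis=1)       # classification confidence per node
    labels = probs.argmax(axis=1)  # predicted label per node
    # Filter out low-confidence predictions (threshold 0.4 in the paper,
    # chosen above the 3-class chance probability of 0.33 for Pubmed).
    kept = np.nonzero(conf >= threshold)[0]
    # Keep the m most confident of the remaining predictions.
    order = kept[np.argsort(-conf[kept])][:m]
    return unlabeled_idx[order], labels[order]
```

In a self-training loop, the returned nodes and pseudo-labels would be moved from the unlabeled pool into the labeled set before retraining the classifier.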