Belief Propagation Network for Hard Inductive Semi-Supervised Learning
Authors: Jaemin Yoo, Hyunsik Jeon, U Kang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to demonstrate the superior performance of BPN, which shows the highest classification accuracy on four datasets compared with the state-of-the-art approaches for inductive learning. |
| Researcher Affiliation | Academia | Jaemin Yoo , Hyunsik Jeon and U Kang Seoul National University {jaeminyoo, jeon185, ukang}@snu.ac.kr |
| Pseudocode | Yes | Algorithm 1 Belief Propagation Network (BPN) |
| Open Source Code | No | The paper mentions obtaining 'public implementations of the baseline methods' and provides a GitHub link for Planetoid (a baseline method), but does not state that the source code for BPN is publicly available or provided. |
| Open Datasets | Yes | We use four datasets summarized in Table 2. The first three datasets [Sen et al., 2008] were used to evaluate the previous approaches [Velickovic et al., 2018]. ... We also use an Amazon dataset based on [McAuley et al., 2015; He and McAuley, 2016]. |
| Dataset Splits | Yes | For each dataset, we use 20 nodes of each class for training, 1,000 nodes for testing, and 500 nodes for validation as done in [Kipf and Welling, 2017]. |
| Hardware Specification | Yes | Our experiments are done in a workstation with Geforce GTX 1080 Ti. |
| Software Dependencies | No | The paper mentions using 'a recent deep learning framework' and 'Adam' as an optimizer but does not specify software names with version numbers (e.g., Python 3.x, TensorFlow 2.x, PyTorch 1.x). |
| Experiment Setup | Yes | We use a feedforward neural network with one hidden layer as a classifier f. ... The number of hidden units is set to 32, and we use dropout [Srivastava et al., 2014] of probability 0.5. Adam [Kingma and Ba, 2015] is used as an optimizer for all datasets with different step sizes determined by validation performances. ... = 0.05, λ = 10^-4, and β = 0.9 in Cora, and is changed to 0.01 in Citeseer. ... = 1.0 and β = 0.5. We lastly set λ = 2×10^-2 in PubMed ... The number of diffusion operations is set to one in all datasets |
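The classifier f quoted in the Experiment Setup row (one hidden layer of 32 units, ReLU, dropout with probability 0.5) can be sketched as a plain forward pass. This is a minimal NumPy illustration, not the authors' implementation: the Cora-like dimensions (1433 input features, 7 classes) and the weight initialization are assumptions for the example only.

```python
import numpy as np

rng = np.random.default_rng(0)

def classifier_forward(x, w1, b1, w2, b2, p_drop=0.5, train=True):
    """One-hidden-layer feedforward classifier with inverted dropout,
    matching the shape described in the paper's setup (32 hidden units,
    dropout probability 0.5)."""
    h = np.maximum(x @ w1 + b1, 0.0)          # hidden layer with ReLU
    if train:
        mask = rng.random(h.shape) >= p_drop  # drop units with prob. p_drop
        h = h * mask / (1.0 - p_drop)         # rescale so expectation matches
    return h @ w2 + b2                        # class logits

# Hypothetical Cora-like sizes: 1433 features, 32 hidden units, 7 classes.
w1 = rng.standard_normal((1433, 32)) * 0.01
b1 = np.zeros(32)
w2 = rng.standard_normal((32, 7)) * 0.01
b2 = np.zeros(7)

logits = classifier_forward(rng.standard_normal((4, 1433)), w1, b1, w2, b2)
print(logits.shape)  # (4, 7)
```

Training would then minimize a cross-entropy loss with Adam at the quoted per-dataset step sizes; that loop is omitted here since the paper's exact schedule is validation-dependent.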