Confidence-Based Feature Imputation for Graphs with Partially Known Features

Authors: Daeho Um, Jiwoong Park, Seulki Park, Jin Young Choi

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To validate our method, we conducted experiments for two main graph learning tasks: semi-supervised node classification and link prediction."
Researcher Affiliation | Academia | Daeho Um, Jiwoong Park, Seulki Park, Jin Young Choi; Department of Electrical and Computer Engineering, ASRI, Seoul National University; {daehoum1,ptywoong,seulki.park,jychoi}@snu.ac.kr
Pseudocode | Yes | Appendix A.6: "PYTORCH-STYLE PSEUDO-CODE OF PSEUDO-CONFIDENCE-BASED FEATURE IMPUTATION (PCFI)"
Open Source Code | Yes | Code is available at https://github.com/daehoum1/pcfi.
Open Datasets | Yes | "We experimented with six benchmark datasets from two different domains: citation networks (Cora, CiteSeer, PubMed (Sen et al., 2008) and OGBN-Arxiv (Hu et al., 2020)) and recommendation networks (Amazon-Computers and Amazon-Photo (Shchur et al., 2018))." All the datasets used in the experiments are publicly available from the MIT-licensed PyTorch Geometric package.
Dataset Splits | Yes | For semi-supervised node classification, 10 different training/validation/test splits were randomly generated, except for OGBN-Arxiv, where the split is fixed according to the specified criteria. For link prediction, 10 different training/validation/test splits were likewise randomly generated for each dataset. Following the setting in (Klicpera et al., 2019), in each generated split, 20 nodes per class were assigned as training nodes; the number of validation nodes was then chosen so that training and validation nodes together total 1500, and all remaining nodes were used as test nodes.
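The split rule above (20 training nodes per class, validation nodes filling the combined train+val count up to 1500, the remainder as test) can be sketched as follows. This is a minimal illustration, not the released code; the function name and signature are hypothetical.

```python
import numpy as np

def make_split(labels, train_per_class=20, train_plus_val=1500, seed=0):
    """Randomly split node indices into train/val/test following the
    stated rule: `train_per_class` nodes per class become training nodes,
    validation nodes top up train+val to `train_plus_val`, and every
    remaining node is a test node."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    perm = rng.permutation(labels.shape[0])  # random node order

    # Take the first `train_per_class` shuffled nodes of each class.
    train = []
    for c in np.unique(labels):
        idx = perm[labels[perm] == c]
        train.extend(idx[:train_per_class].tolist())
    train = np.array(train)

    # Remaining nodes, still in shuffled order, split into val and test.
    train_set = set(train.tolist())
    remaining = np.array([i for i in perm if i not in train_set])
    n_val = train_plus_val - len(train)
    return train, remaining[:n_val], remaining[n_val:]
```

With a Cora-sized toy labeling (7 classes, 2800 nodes) this yields 140 training, 1360 validation, and 1300 test nodes.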
Hardware Specification | Yes | "We used PyTorch (Paszke et al., 2017) and PyTorch Geometric (Fey & Lenssen, 2019) for the experiments on an NVIDIA RTX 2080 Ti GPU with 11GB of memory."
Software Dependencies | Yes | "We used PyTorch (Paszke et al., 2017) and PyTorch Geometric (Fey & Lenssen, 2019) for the experiments..."
Experiment Setup | Yes | By grid search on each validation set, the learning rate of every experiment was chosen from {0.01, 0.005, 0.001, 0.0001}, and dropout (Srivastava et al., 2014) was applied with p selected from {0.0, 0.25, 0.5}. The number of layers was set to 3 and the dropout rate was fixed at p = 0.5. The hidden dimension was set to 64 for all datasets except OGBN-Arxiv, where 256 was used.
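The stated hyperparameter grid (learning rate × dropout, selected on the validation set) amounts to a 12-configuration search. A minimal sketch, where `evaluate` is a hypothetical callable standing in for one training run that returns validation accuracy:

```python
from itertools import product

# Grid stated in the experiment setup above.
LEARNING_RATES = [0.01, 0.005, 0.001, 0.0001]
DROPOUT_RATES = [0.0, 0.25, 0.5]

def grid_search(evaluate):
    """Try every (lr, dropout) pair and keep the one with the best
    validation score. `evaluate(lr, dropout)` is assumed to train the
    model once with those settings and return validation accuracy."""
    best_cfg, best_score = None, float("-inf")
    for lr, p in product(LEARNING_RATES, DROPOUT_RATES):
        score = evaluate(lr, p)
        if score > best_score:
            best_cfg, best_score = (lr, p), score
    return best_cfg, best_score
```

Each `evaluate` call corresponds to one full training run, so the grid costs 12 runs per dataset and task.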