Latent Graph Inference with Limited Supervision
Authors: Jianglin Lu, Yi Xu, Huan Wang, Yue Bai, Yun Fu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on representative benchmarks demonstrate that reducing the starved nodes consistently improves the performance of state-of-the-art LGI methods, especially under extremely limited supervision (6.12% improvement on Pubmed with a labeling rate of only 0.3%). |
| Researcher Affiliation | Academia | Jianglin Lu¹, Yi Xu¹, Huan Wang¹, Yue Bai¹, Yun Fu¹,²; ¹Department of Electrical and Computer Engineering, Northeastern University; ²Khoury College of Computer Science, Northeastern University |
| Pseudocode | No | The paper describes methods in text and uses mathematical theorems and proofs, but it does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Project Page: https://jianglin954.github.io/LGI-LS/ |
| Open Datasets | Yes | Following the common settings of existing LGI methods [4, 10, 13, 46], we conduct experiments on four well-known benchmarks: Cora, Citeseer, Pubmed [19, 34], and ogbn-arxiv [14]. |
| Dataset Splits | Yes | To test the performance under different labeling rates, for the Cora and Citeseer datasets, we add half of the validation samples to the training sets, resulting in Cora390 and Citeseer370, where the suffix number represents the total number of labeled nodes (see the split-construction sketch below the table). |
| Hardware Specification | No | The paper discusses computational efficiency and FLOPs but does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using public code repositories for baselines and states that hyperparameters follow the baselines, but it does not explicitly list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We select the values of τ and α from the sets {10, 15, 20, 25, 30, 50} and {0.01, 0.1, 1.0, 10, 50, 100}, respectively. For other hyperparameters such as learning rate and weight decay, we follow the baselines and use the same settings (see the grid-search sketch below the table). |
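
The Cora390 split quoted in the Dataset Splits row follows from the standard Planetoid splits (140 training and 500 validation nodes for Cora). Below is a minimal sketch, not the authors' code, assuming PyTorch Geometric's `Planetoid` loader and assuming the first half of the validation indices is the half that gets moved (the paper does not specify the selection order):

```python
from torch_geometric.datasets import Planetoid

# Standard Planetoid split for Cora: 140 train / 500 val / 1000 test nodes.
dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

# Move half of the 500 validation nodes into the training set: 140 + 250 = 390.
val_idx = data.val_mask.nonzero(as_tuple=True)[0]
moved = val_idx[: val_idx.numel() // 2]  # assumed: first half in index order

data.train_mask = data.train_mask.clone()  # avoid mutating the cached dataset tensors
data.val_mask = data.val_mask.clone()
data.train_mask[moved] = True
data.val_mask[moved] = False

print(int(data.train_mask.sum()), int(data.val_mask.sum()))  # 390 250 -> "Cora390"
```

Repeating the same procedure on Citeseer (120 standard training nodes plus 250 moved validation nodes) yields Citeseer370.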
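The τ/α selection quoted in the Experiment Setup row amounts to a 6 × 6 grid search. The sketch below only enumerates that grid; `train_and_evaluate` is a hypothetical placeholder standing in for one training run of the chosen LGI baseline and is not code from the paper:

```python
from itertools import product
from typing import Callable, Tuple

def grid_search(train_and_evaluate: Callable[[float, float], float]) -> Tuple[tuple, float]:
    """Enumerate the 36 (tau, alpha) settings and keep the best validation score."""
    tau_grid = [10, 15, 20, 25, 30, 50]
    alpha_grid = [0.01, 0.1, 1.0, 10, 50, 100]

    best_cfg, best_acc = None, float("-inf")
    for tau, alpha in product(tau_grid, alpha_grid):
        acc = train_and_evaluate(tau, alpha)  # one full training run per setting
        if acc > best_acc:
            best_cfg, best_acc = (tau, alpha), acc
    return best_cfg, best_acc
```

Learning rate, weight decay, and other hyperparameters would be left at the baselines' default settings, as stated in the quoted response.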