Linear Uncertainty Quantification of Graphical Model Inference
Authors: Chenghua Guo, Han Yu, Jiaxin Liu, Chao Chen, Qi Li, Sihong Xie, Xi Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, we demonstrate that LinUProp is consistent with the sampling-based method but with linear scalability and fast convergence. Moreover, LinUProp outperforms competitors in uncertainty-based active learning on four real-world graph datasets, achieving higher accuracy with a lower labeling budget. |
| Researcher Affiliation | Academia | 1Key Laboratory of Trustworthy Distributed Computing and Service (MoE), Beijing University of Posts and Telecommunications, China 2College of Artificial Intelligence, Beijing University of Posts and Telecommunications, China 3Department of Computer Science and Engineering, Lehigh University, USA 4School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), China 5Department of Computer Science, Iowa State University, USA 6AI Thrust, The Hong Kong University of Science and Technology (Guangzhou), China |
| Pseudocode | No | No clearly labeled 'Pseudocode' or 'Algorithm' block, or structured steps formatted like code or an algorithm, was found. |
| Open Source Code | Yes | Project page including code at https://github.com/chenghuaguo/LinUProp |
| Open Datasets | Yes | We validate the properties of LinUProp on three popular citation networks (Cora, Citeseer, and PubMed) [18] and a political blog hyperlink network (PolBlogs) [1]. |
| Dataset Splits | Yes | Table 1: Statistical information and partitioning of datasets. The subsets Vtrain, Vval, and Vtest are sampled from the original node set; the remaining nodes are in Vulp. We use these subsets in the effectiveness experiments. |
| Hardware Specification | Yes | We conducted convergence and scalability experiments on an Apple M2 chip, and additional convergence experiments on a 2.2 GHz Intel Xeon CPU. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers for key components (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We set node priors based on classification type: Beta distributions for binary and Dirichlet distributions for multi-class (k classes), both using a parameter vector α of length k. In each dataset, 30% of nodes are randomly labeled; if a node is labeled as class i, αi = 10 and all other entries are 1. Unlabeled nodes have α = 1. Prior interval widths for each class are twice the standard deviation of the distribution, capturing uncertainty by representing the interval as mean ± std (see the sketch after this table). |
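The prior construction quoted in the Experiment Setup row maps directly onto a few lines of NumPy. The sketch below is illustrative only, assuming a k-class Dirichlet prior whose per-class marginals are Beta distributions; the function names (`node_prior`, `prior_interval`) are hypothetical and are not taken from the LinUProp repository.

```python
import numpy as np

def node_prior(label=None, k=3):
    """Dirichlet parameter vector alpha for one node (hypothetical helper).

    Labeled as class i: alpha[i] = 10, all other entries 1.
    Unlabeled: all entries 1 (uniform prior), matching the setup above.
    """
    alpha = np.ones(k)
    if label is not None:
        alpha[label] = 10.0
    return alpha

def prior_interval(alpha):
    """Per-class interval mean +/- std, so its width is twice the standard
    deviation of each Dirichlet marginal, which is Beta(alpha_i, alpha_0 - alpha_i)."""
    a0 = alpha.sum()
    mean = alpha / a0
    var = alpha * (a0 - alpha) / (a0**2 * (a0 + 1))
    std = np.sqrt(var)
    return mean - std, mean + std

# Example: one node labeled as class 0 and one unlabeled node.
for alpha in (node_prior(label=0), node_prior()):
    lo, hi = prior_interval(alpha)
    print(alpha, lo.round(3), hi.round(3))
```

For the labeled node, the class-0 marginal concentrates near its mean of 10/12 with a narrow interval, while the unlabeled node's uniform prior yields wide, identical intervals across all classes, which is the uncertainty behavior the setup describes.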