Fast Online Node Labeling for Very Large Graphs

Authors: Baojian Zhou, Yifan Sun, Reza Babanezhad Harikandeh

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct extensive experiments on online node classification for graphs of different sizes and compare FASTONL with baselines. We address the following questions: 1) Do these parameterized kernels work and capture label smoothness? 2) How does FASTONL compare with baselines in terms of classification accuracy and run time?
Researcher Affiliation | Collaboration | (1) School of Data Science, Fudan University, Shanghai, China; (2) Shanghai Key Laboratory of Data Science, Fudan University; (3) Department of Computer Science, Stony Brook University, Stony Brook, USA; (4) Samsung-SAIT AI Lab, Montreal, Canada.
Pseudocode | Yes | Algorithm 1 RELAXATION(G, λ, D) (Rakhlin & Sridharan)
Open Source Code | Yes | Our code and datasets have been provided as supplementary material and are publicly available at https://github.com/baojian/FastONL.
Open Datasets | Yes | We collect ten graph datasets where nodes have true labels (Tab. 4) and create one large-scale Wikipedia graph whose chronologically-ordered node labels come from ten categories of English Wikipedia articles.
Dataset Splits | No | The paper does not explicitly provide training/validation/test splits (e.g., percentages or specific counts) for reproducibility, nor does it specify cross-validation settings. It discusses parameters and sequential sample processing.
Hardware Specification | No | All experiments are conducted on a server with 40 cores and 250GB of memory.
Software Dependencies | No | We implemented all methods using Python and used the inverse function of the scipy library to compute the matrix inverse.
Experiment Setup | Yes | All experimental setups, including parameter tuning, are further discussed in Appendix D. For FASTONL, we chose the first two kernels defined in Tab. 1 and named them FASTONL-K1 and FASTONL-K2, respectively.
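The Software Dependencies row notes that the authors computed matrix inverses with scipy. As a minimal illustration of that step, the sketch below builds a regularized graph-Laplacian kernel on a toy graph and inverts it with `scipy.linalg.inv`. The 3-node graph, the value of λ, and the kernel form `(I + λL)⁻¹` are all illustrative assumptions, not the authors' actual configuration or one of the kernels in Tab. 1.

```python
import numpy as np
from scipy.linalg import inv

# Toy symmetric adjacency matrix of a 3-node path graph (assumed example).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
D = np.diag(A.sum(axis=1))   # degree matrix
L_graph = D - A              # combinatorial graph Laplacian
lam = 0.5                    # regularization weight (hypothetical)

# Regularized kernel matrix via scipy's dense matrix inverse.
K = inv(np.eye(A.shape[0]) + lam * L_graph)
```

Note that a dense inverse costs O(n³) and is only feasible for small graphs; for the very large graphs the paper targets, FASTONL avoids materializing this inverse explicitly.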