LogSpecT: Feasible Graph Learning Model from Stationary Signals with Recovery Guarantees

Authors: Shangyuan Liu, Linglingzhi Zhu, Anthony Man-Cho So

NeurIPS 2023

Reproducibility checklist: each item below gives the reproducibility variable, the assessed result, and the supporting LLM response.
Research Type: Experimental
LLM response: "Extensive numerical results on both synthetic and real networks not only corroborate the stability of our proposed methods, but also highlight their comparable and even superior performance than existing models." "In this section, we evaluate our proposed algorithm and models via numerical experiments. The experiments are conducted on both synthetic and real networks. We apply the standard metrics [18]: F-measure, Precision, and Recall to assess the quality of learned graphs."
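For concreteness, these metrics reduce to standard edge-set statistics on the recovered graph. Below is a minimal sketch (ours, not the authors' code; the paper cites [18] for the exact definitions) of how Precision, Recall, and F-measure are typically computed for a learned adjacency matrix:

```python
import numpy as np

def edge_recovery_metrics(W_learned, W_true, eps=1e-4):
    """Precision, Recall, and F-measure of the recovered edge set.

    W_learned, W_true: symmetric weighted adjacency matrices.
    eps: weights with magnitude below eps are treated as non-edges.
    """
    iu = np.triu_indices_from(W_true, k=1)    # count each edge once
    pred = np.abs(W_learned[iu]) > eps
    true = np.abs(W_true[iu]) > eps
    tp = np.sum(pred & true)                  # correctly recovered edges
    precision = tp / max(np.sum(pred), 1)
    recall = tp / max(np.sum(true), 1)
    f_measure = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f_measure
```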
Researcher Affiliation: Academia
LLM response: Shangyuan Liu (The Chinese University of Hong Kong, shangyuanliu@link.cuhk.edu.hk); Linglingzhi Zhu (The Chinese University of Hong Kong, llzzhu@se.cuhk.edu.hk); Anthony Man-Cho So (The Chinese University of Hong Kong, manchoso@se.cuhk.edu.hk)
Pseudocode: No
LLM response: The paper describes the steps of the L-ADMM algorithm but does not present them in a formal pseudocode or algorithm block.
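For readers unfamiliar with the method class, the following is a generic linearized-ADMM (L-ADMM) skeleton for problems of the form min_x f(x) + g(Ax). It sketches the general template only, not the paper's specific updates, and every name in it is ours:

```python
import numpy as np

def linearized_admm(prox_f, prox_g, A, x0, rho=1.0, tol=1e-6, max_iter=100000):
    """Generic linearized ADMM for min_x f(x) + g(Ax).

    The x-subproblem is linearized, so only the proximal operators of
    f and g are needed. The step size tau = 1 / (rho * ||A||_2^2) is
    the standard choice that guarantees convergence.
    """
    tau = 1.0 / (rho * np.linalg.norm(A, 2) ** 2)
    x = x0.copy()
    z = A @ x
    lam = np.zeros_like(z)                        # dual variable
    for _ in range(max_iter):
        # linearized x-update: gradient step on the augmented term, then prox
        grad = A.T @ (lam + rho * (A @ x - z))
        x = prox_f(x - tau * grad, tau)
        z = prox_g(A @ x + lam / rho, 1.0 / rho)  # z-update via prox of g
        lam = lam + rho * (A @ x - z)             # dual ascent
        if np.linalg.norm(A @ x - z) <= tol:      # primal residual stopping rule
            break
    return x
```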
Open Source Code: Yes
LLM response: "The source code is available at: https://github.com/StevenSYL/NeurIPS2023-LogSpecT."
Open Datasets: Yes
LLM response: "In this set of experiments, we compare the performance of LogSpecT (resp. rLogSpecT) with SpecT (resp. rSpecT) and other methods from the statistics and GSP communities on the Protein database and Reddit database from [6]. We choose graphs in databases whose numbers of nodes are smaller than 50 and generate stationary graph signals from the generative model (1)..." "The USPS dataset is a handwritten digit dataset. As shown in [25], it is nearly stationary with respect to the nearest neighbour graph."
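A common way to generate signals that are stationary with respect to a graph, and (up to details) the shape of generative models like the paper's model (1), is to filter white noise through a polynomial of the graph shift operator. A hedged sketch, with filter coefficients of our own choosing:

```python
import numpy as np

def stationary_signals(S, n_samples, filter_order=3, seed=0):
    """Generate graph signals stationary with respect to the shift S.

    Model: x = H(S) w, where H(S) = sum_k h_k S^k is a polynomial
    graph filter and w ~ N(0, I_m) is white noise. The coefficients
    h_k are drawn at random here purely for illustration.
    """
    rng = np.random.default_rng(seed)
    m = S.shape[0]
    h = rng.standard_normal(filter_order + 1)        # filter taps h_k
    H = sum(c * np.linalg.matrix_power(S, k) for k, c in enumerate(h))
    W = rng.standard_normal((m, n_samples))          # white inputs w
    return H @ W                                     # one signal per column
```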
Dataset Splits: No
LLM response: "To choose the threshold ε, we either use a training-based strategy or a searching-based strategy. The training-based one searches the best ε on k graphs and applies the value to the newly learned graph. We rely on the training-based strategy to obtain the best threshold ε from 10 randomly chosen training graphs." The paper does not specify exact split percentages or counts for training/validation/test sets across its experiments, nor does it refer to standard splits with citations.
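The training-based strategy can be realized, for example, by grid-searching ε on the training graphs. The selection criterion below (average F-measure) and all names are our assumptions, not details given in the paper:

```python
import numpy as np

def pick_threshold(learned, truths, candidates=None):
    """Training-based choice of the edge threshold eps.

    learned / truths: matched lists of learned and ground-truth
    adjacency matrices for the training graphs. Returns the candidate
    eps with the highest average F-measure (an assumed criterion).
    """
    if candidates is None:
        candidates = np.linspace(0.0, 1.0, 101)

    def f_measure(W_hat, W, eps):
        iu = np.triu_indices_from(W, k=1)
        pred = np.abs(W_hat[iu]) > eps
        true = np.abs(W[iu]) > 0
        tp = np.sum(pred & true)
        p, r = tp / max(np.sum(pred), 1), tp / max(np.sum(true), 1)
        return 2 * p * r / max(p + r, 1e-12)

    scores = [np.mean([f_measure(Wh, W, eps) for Wh, W in zip(learned, truths)])
              for eps in candidates]
    return candidates[int(np.argmax(scores))]
```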
Hardware Specification: No
LLM response: The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments.
Software Dependencies: No
LLM response: "We compare our L-ADMM with CVXPY [8]. The solver used in CVXPY is MOSEK." The paper names these software tools but does not specify their version numbers.
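For reference, selecting MOSEK as the backend in CVXPY looks as follows. The toy objective is a stand-in of ours, not the LogSpecT model, and running it requires a MOSEK installation and license:

```python
import cvxpy as cp
import numpy as np

# Toy problem illustrating how CVXPY dispatches to the MOSEK solver.
m = 20
c = np.random.default_rng(0).standard_normal(m)
x = cp.Variable(m, nonneg=True)
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - c) + cp.norm1(x)))
prob.solve(solver=cp.MOSEK)   # swap in cp.SCS or others if MOSEK is unavailable
print(prob.status, prob.value)
```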
Experiment Setup: Yes
LLM response: "The parameter δ is set as 10√(log n / n). For the L-ADMM algorithm, the target accuracy is set as 10^-6 and the initialization is set as zero. The ER graphs are generated by placing an edge between each pair of nodes independently with probability p = 0.2 and the weight on each edge is set to 1. The input graph signal w ~ N(0, I_m) is a random vector following the normal distribution. In this experiment, the sample size n is chosen from 10 to 10^6 and δ_n is set as 0.2√(log n / n). The parameter δ_n in rLogSpecT is set as 10√(log n / n) and in rSpecT it is set as the smallest value that allows a feasible solution [31]."
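A minimal sketch of this synthetic setup (the helper name, the use of networkx, and the fixed seed are ours):

```python
import numpy as np
import networkx as nx

def er_experiment_setup(m=50, p=0.2, n=1000, seed=0):
    """Erdos-Renyi setup from the described experiment.

    Places each edge independently with probability p = 0.2, sets unit
    edge weights, draws white input signals w ~ N(0, I_m), and computes
    delta_n = 10 * sqrt(log n / n).
    """
    G = nx.erdos_renyi_graph(m, p, seed=seed)
    W_true = nx.to_numpy_array(G)               # binary adjacency, weights = 1
    rng = np.random.default_rng(seed)
    inputs = rng.standard_normal((m, n))        # columns are w ~ N(0, I_m)
    delta_n = 10 * np.sqrt(np.log(n) / n)
    return W_true, inputs, delta_n
```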