Generalized Matrix Means for Semi-Supervised Learning with Multilayer Graphs

Authors: Pedro Mercado, Francesco Tudisco, Matthias Hein

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We verify the analysis in expectation with extensive experiments with random graphs, showing that our approach compares favorably with state of the art methods, yielding a good classification performance on several relevant settings where state of the art approaches fail. Finally, we perform numerical experiments on real world datasets and verify that our approach is competitive to state of the art approaches."
Researcher Affiliation | Academia | "1 University of Tübingen, Germany; 2 Gran Sasso Science Institute, Italy"
Pseudocode | No | The paper describes a three-step numerical scheme in Section 5, but it is presented as descriptive text rather than as structured pseudocode or an algorithm block. (A hedged code sketch of such a scheme is given after this table.)
Open Source Code | No | The paper refers to implementations by other authors (e.g., "We follow the authors implementation: http://pages.cs.wisc.edu/~jerryzhu/pub/harmonic_function.m" and "code released by the authors: https://github.com/egujr001/SMACD"). While it states "For TLMV [33] and SGMI we use our own implementation", it does not explicitly state that the code for the proposed method is released or publicly available.
Open Datasets | Yes | "We consider the following datasets: 3-sources [16], which consists of news articles that were covered by news sources BBC, Reuters and Guardian; BBC [7] and BBC Sports [8] news articles, a dataset of Wikipedia articles with ten different classes [24], the hand written UCI digits dataset with six different sets of features, and citation datasets Citeseer [17], Cora [18] and WebKB (Texas) [5]."
Dataset Splits | Yes | "We consider different amounts of labeled nodes: going from 1% to 50% (y-axis)." and "The percentage of labeled nodes per class is in the range {1%, 5%, 10%, 15%, 20%, 25%}." (A sketch of this per-class sampling appears after this table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper points to software for baseline methods (e.g., http://pages.cs.wisc.edu/~jerryzhu/pub/harmonic_function.m and https://github.com/egujr001/SMACD) but does not give version numbers for these or for any other software components used in the experiments.
Experiment Setup | Yes | "We fix nearest neighbourhood size to k = 10 and generate 10 samples of labeled nodes, where the percentage of labeled nodes per class is in the range {1%, 5%, 10%, 15%, 20%, 25%}. Finally we set parameters for TSS to (c = 10, c0 = 0.4), SMACD (λ = 0.01), TLMV (λ = 1), SGMI (λ1 = 1, λ2 = 10^-3) and λ = 0.1 for L_1 and λ = 10 for L_{-1} and L_{-10}. We do not perform cross validation in our experimental setting due to the large execution time in some of the methods here considered. Hence we fix the parameters for each method in all experiments." (A sketch of the k-NN layer construction used in this setup follows this table.)
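
Since the paper gives its numerical scheme only as descriptive text, the following is a minimal dense-matrix sketch of semi-supervised learning with a power mean Laplacian built from several graph layers. The symmetric normalized Laplacian per layer, the diagonal shift eps, the eigendecomposition-based matrix powers, and the Tikhonov-style label fit F = (I + λ L_p)^{-1} Y are illustrative assumptions; the paper's actual objective and its scalable three-step solver are not reproduced here.

```python
import numpy as np

def _matrix_power(A, p):
    """Real power of a symmetric positive semi-definite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    vals = np.clip(vals, 1e-12, None)          # guard against tiny negative eigenvalues
    return (vecs * vals**p) @ vecs.T

def power_mean_laplacian(layers, p=-10, eps=1e-6):
    """Matrix power mean ((1/T) * sum_t (L_t + eps*I)^p)^(1/p) of the layer Laplacians."""
    n = layers[0].shape[0]
    acc = np.zeros((n, n))
    for W in layers:                            # W: symmetric adjacency matrix of one layer
        deg = np.maximum(W.sum(axis=1), 1e-12)
        L_sym = np.eye(n) - W / np.sqrt(np.outer(deg, deg))   # normalized Laplacian
        acc += _matrix_power(L_sym + eps * np.eye(n), p)
    return _matrix_power(acc / len(layers), 1.0 / p)

def ssl_power_mean(layers, labels, n_classes, p=-10, lam=10.0):
    """Class scores F = (I + lam*L_p)^{-1} Y, where labels < 0 mark unlabeled nodes."""
    n = layers[0].shape[0]
    Lp = power_mean_laplacian(layers, p=p)
    Y = np.zeros((n, n_classes))
    for i, c in enumerate(labels):
        if c >= 0:
            Y[i, c] = 1.0                       # one-hot encoding of the known labels
    F = np.linalg.solve(np.eye(n) + lam * Lp, Y)
    return F.argmax(axis=1)
```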
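
The labelled-node protocol quoted in the Dataset Splits and Experiment Setup rows (a fixed percentage of labelled nodes per class, 10 random samples per percentage) can be sketched as below. The function name, the rounding rule, and the random seed are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np

def sample_labeled_mask(y, pct, rng):
    """Reveal roughly pct of the nodes of each class as labelled."""
    mask = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        n_lab = max(1, int(round(pct * len(idx))))      # at least one labelled node per class
        mask[rng.choice(idx, size=n_lab, replace=False)] = True
    return mask

y = np.repeat(np.arange(3), 100)                        # toy ground truth: 3 classes, 100 nodes each
rng = np.random.default_rng(0)
percentages = [0.01, 0.05, 0.10, 0.15, 0.20, 0.25]      # as quoted in the Dataset Splits row
masks = {pct: [sample_labeled_mask(y, pct, rng) for _ in range(10)]   # 10 samples per percentage
         for pct in percentages}
```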
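
The Experiment Setup row fixes the nearest-neighbour size to k = 10 but does not say how the k-NN layers are built; the sketch below shows one plausible construction using Euclidean distances and OR-style symmetrization, both of which are assumptions rather than details confirmed by the paper.

```python
import numpy as np

def knn_adjacency(X, k=10):
    """Binary, symmetric k-nearest-neighbour graph from a feature matrix X of shape (n, d)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared Euclidean distances
    np.fill_diagonal(d2, np.inf)                           # exclude self-loops
    nn = np.argsort(d2, axis=1)[:, :k]                     # indices of the k nearest neighbours
    W = np.zeros_like(d2)
    rows = np.repeat(np.arange(X.shape[0]), k)
    W[rows, nn.ravel()] = 1.0
    return np.maximum(W, W.T)                              # symmetrize: keep an edge if either end selects it

# One layer per feature view gives the multilayer graph, e.g. for the UCI digits views:
# layers = [knn_adjacency(X_view, k=10) for X_view in feature_views]
```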