Large Scale Graph Learning From Smooth Signals

Authors: Vassilis Kalofolias, Nathanaël Perraudin

Venue: ICLR 2019

Reproducibility Assessment (each entry gives the variable, the assessed result, and the supporting LLM response):
Research Type: Experimental
LLM Response: "In our experiments we wish to answer questions regarding (1) the approximation quality of our model, (2) the quality of our automatic parameter selection, (3) the benefit from learning versus A-NN for large scale applications and (4) the scalability of the model." (Section 5, Experiments)

Researcher Affiliation: Academia
LLM Response: "Vassilis Kalofolias, Signal Processing Laboratory 2, EPFL, Station 11, 1015 Lausanne, Switzerland, v.kalofolias@gmail.com; Nathanaël Perraudin, Swiss Data Science Center, ETH Zürich, Universitätstrasse 25, 8006 Zürich, Switzerland, nperraud@ethz.ch"

Pseudocode: Yes
LLM Response: "Algorithm 1: Solver of the one-node problem, (8)."

Open Source Code: Yes
LLM Response: "We provide code for our algorithm online in the GSPBox (Perraudin et al., 2014). A tutorial based on this paper can be found in https://epfl-lts2.github.io/gspbox-html/doc/demos/gsp_demo_learn_graph_large.html"
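
For context on what the released GSPBox code computes, the linked demo learns a graph by minimizing a smoothness term plus a log-degree barrier and a Frobenius regularizer. The sketch below evaluates that objective in plain NumPy; the model form follows the smooth-signal graph-learning line of work behind the toolbox, while the function name, the `alpha = beta = 1` defaults, and the dense-matrix layout are illustrative assumptions rather than the toolbox's MATLAB API.

```python
# Minimal sketch (not the GSPBox API) of the graph-learning objective:
#   ||W .* Z||_1  -  alpha * 1' log(W 1)  +  (beta / 2) ||W||_F^2
import numpy as np

def graph_learning_objective(W, Z, alpha=1.0, beta=1.0):
    """W: (n, n) symmetric nonnegative weights; Z[i, j] = ||x_i - x_j||^2.

    Assumes every node has positive degree so the log term is finite.
    """
    degrees = W.sum(axis=1)
    smoothness = np.sum(W * Z)          # equals 2 * tr(X' L X) for this Z
    log_barrier = -alpha * np.log(degrees).sum()
    frobenius = 0.5 * beta * np.sum(W ** 2)
    return smoothness + log_barrier + frobenius
```
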
Open Datasets: Yes
LLM Response: "US Census 1990: Dataset available at the UCI machine learning repository, consists of approximately 2.5 million samples of 68 features. https://archive.ics.uci.edu/ml/datasets/US+Census+Data+(1990)"
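
Reproducing the large-scale experiment starts with loading this table. A hedged loader follows; the raw file name `USCensus1990.data.txt` and the `caseid` identifier column are assumptions based on the UCI listing and should be verified against the dataset page before use.

```python
# Hypothetical loader for the UCI "US Census Data (1990)" set.
# ASSUMPTION: the raw comma-separated file is USCensus1990.data.txt and its
# first column is the record id "caseid"; check the UCI page linked above.
import pandas as pd

df = pd.read_csv("USCensus1990.data.txt")        # roughly 2.5 million rows
X = df.drop(columns=["caseid"]).to_numpy()       # 68 categorical-coded features
print(X.shape)
```
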
Dataset Splits: No
LLM Response: The paper mentions specific datasets, such as the MNIST training set (60,000 images) or semi-supervised learning with 1% known labels, but it does not provide explicit training/validation/test split percentages or sample counts for all experiments.

Hardware Specification: No
LLM Response: The paper states that graph learning for a 1-million-node graph was performed "on a desktop computer", but it does not specify hardware components such as CPU model, GPU, or memory.

Software Dependencies: No
LLM Response: The paper mentions a "simple Matlab implementation", the FLANN library, SDPT3, FISTA, and forward-backward-based primal-dual optimization schemes, but it does not specify version numbers for any of these software dependencies.
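
Although no versions are given, the named "forward-backward-based primal-dual" scheme is standard enough to sketch. Below is one Vu-Condat-type iteration for the log-degree model above, operating on a vector of edge weights restricted to an allowed edge set; the splitting and proximal formulas are the usual ones for this objective, while the step-size choices, the 500-iteration budget (matching the quoted setup), and all identifiers are illustrative assumptions, not the authors' MATLAB implementation.

```python
# Sketch of a forward-backward primal-dual (Vu-Condat-type) solver for
#   min_{w >= 0}  2 z'w  -  alpha * sum(log(S w))  +  beta * ||w||^2,
# where w holds one weight per allowed edge and S maps edge weights to
# node degrees. Step sizes and iteration count are assumptions.
import numpy as np
import scipy.sparse as sp

def degree_operator(edges, n):
    """Sparse S with (S w)_i = sum of w over edges incident to node i."""
    e = len(edges)
    rows = np.asarray(edges).ravel()             # [i0, j0, i1, j1, ...]
    cols = np.repeat(np.arange(e), 2)
    return sp.csr_matrix((np.ones(2 * e), (rows, cols)), shape=(n, e))

def learn_graph(z, edges, n, alpha=1.0, beta=1.0, maxit=500):
    z = np.asarray(z, dtype=float)               # squared distances per edge
    S = degree_operator(edges, n)
    opnorm2 = 2.0 * S.sum(axis=1).max()          # ||S||^2 <= 2 * max degree
    sigma = 1.0 / np.sqrt(opnorm2)
    tau = 0.9 / (beta + sigma * opnorm2)         # satisfies the step-size bound
    w = np.zeros(len(z))                         # primal: edge weights
    v = np.zeros(n)                              # dual: one variable per node
    for _ in range(maxit):
        # forward-backward step on the primal; prox of 2 z'w + nonnegativity
        w_new = np.maximum(w - tau * (2.0 * beta * w + S.T @ v + 2.0 * z), 0.0)
        # dual ascent with extrapolation; prox of (-alpha sum log)^* via Moreau
        y = v + sigma * (S @ (2.0 * w_new - w))
        v = 0.5 * (y - np.sqrt(y * y + 4.0 * alpha * sigma))
        w = w_new
    return w                                     # weights of the allowed edges
```
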
Experiment Setup: Yes
LLM Response: "For all iterative algorithms, except A-NN, we perform a maximum of 500 iterations. To learn a graph with on average k neighbors per node (k-NN), we first compute an rk-A-NN graph and use its edges as E_allowed. For the final value of θ for a given k, we use the middle of this interval in a logarithmic scale, that is, the geometric mean of the upper and lower limits."
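
The quoted recipe translates into a few lines of code: build an rk-nearest-neighbor graph, keep its edge set as E_allowed, and set θ to the geometric mean of the admissible interval. In the sketch below, sklearn's exact search stands in for the FLANN approximate search the paper uses, the multiplier `r = 4` is an illustrative default, and the interval [θ_min, θ_max] is taken as given since its formula lives in the paper; the returned edge list plugs directly into the solver sketch above.

```python
# Sketch of the experiment setup: E_allowed from an (r*k)-NN graph, and
# theta as the log-scale midpoint of a given interval. The neighbor search
# and the default r are stand-ins, not the paper's FLANN configuration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def allowed_edges(X, k, r=4):
    """Edge set of an (r*k)-nearest-neighbor graph as sorted (i, j), i < j."""
    nn = NearestNeighbors(n_neighbors=r * k + 1).fit(X)
    _, idx = nn.kneighbors(X)                    # column 0 is the point itself
    edges = set()
    for i, neighbors in enumerate(idx[:, 1:]):
        for j in neighbors:
            edges.add((min(i, int(j)), max(i, int(j))))
    return sorted(edges)

def pick_theta(theta_min, theta_max):
    """Middle of [theta_min, theta_max] on a log scale: the geometric mean."""
    return float(np.sqrt(theta_min * theta_max))
```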