Spectrally Approximating Large Graphs with Smaller Graphs

Authors: Andreas Loukas, Pierre Vandergheynst

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | As a proof of concept, Figure 2 compares the actual constants ϵ_k with the bound of Theorem 3.1 when utilizing REC with a heavy-edge potential to coarsen the following benchmark graphs: (i) a point cloud representing a bunny obtained by re-sampling the Stanford bunny 3D mesh (Turk & Levoy, 1994) and applying a k-nn construction (N = 1000, r = 0.4, k = 30); (ii) a k-nn similarity graph capturing the geometry of a 2D manifold usually referred to as the Swiss roll (N = 1000, r = 0.4, k = 10); (iii) a network describing the interaction of yeast proteins (Rossi & Ahmed, 2015) (N = 1458, r = 0.25, d_avg = 2, d_max = 56); and (iv) a d-regular graph (N = 400, r = 0.4, d = 20). Figure 4 depicts the growth of the actual relative error with r. The particular experiment corresponds to a clustering problem involving N = 1000 images, each depicting a selected digit between 0 and 4 from the MNIST database (i.e., K = 5). (A hedged sketch of this k-nn graph construction follows the table.)
Researcher Affiliation | Academia | École Polytechnique Fédérale de Lausanne, Switzerland. Correspondence to: Andreas Loukas <andreas.loukas@epfl.ch>.
Pseudocode | Yes | Algorithm 1: Randomized Edge Contraction (REC). (A hedged sketch of REC follows the table.)
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the work described in this paper, nor does it provide a direct link to a source-code repository.
Open Datasets | Yes | As a proof of concept, Figure 2 compares the actual constants ϵ_k with the bound of Theorem 3.1 when utilizing REC with a heavy-edge potential to coarsen the following benchmark graphs: (i) a point cloud representing a bunny obtained by re-sampling the Stanford bunny 3D mesh (Turk & Levoy, 1994) and applying a k-nn construction (N = 1000, r = 0.4, k = 30); (ii) a k-nn similarity graph capturing the geometry of a 2D manifold usually referred to as the Swiss roll (N = 1000, r = 0.4, k = 10); (iii) a network describing the interaction of yeast proteins (Rossi & Ahmed, 2015) (N = 1458, r = 0.25, d_avg = 2, d_max = 56); and (iv) a d-regular graph (N = 400, r = 0.4, d = 20). Figure 4 depicts the growth of the actual relative error with r. The particular experiment corresponds to a clustering problem involving N = 1000 images, each depicting a selected digit between 0 and 4 from the MNIST database (i.e., K = 5).
Dataset Splits | No | The paper mentions using the MNIST database and other benchmark graphs, but it does not specify exact dataset split percentages or absolute sample counts for each split, nor does it reference predefined standard splits with citations that include such details.
Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments, such as GPU models, CPU models, or cloud computing specifications.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library or solver names with their version numbers, needed to replicate the experiments.
Experiment Setup | Yes | As a proof of concept, Figure 2 compares the actual constants ϵ_k with the bound of Theorem 3.1 when utilizing REC with a heavy-edge potential to coarsen the following benchmark graphs: (i) a point cloud representing a bunny obtained by re-sampling the Stanford bunny 3D mesh (Turk & Levoy, 1994) and applying a k-nn construction (N = 1000, r = 0.4, k = 30); (ii) a k-nn similarity graph capturing the geometry of a 2D manifold usually referred to as the Swiss roll (N = 1000, r = 0.4, k = 10); (iii) a network describing the interaction of yeast proteins (Rossi & Ahmed, 2015) (N = 1458, r = 0.25, d_avg = 2, d_max = 56); and (iv) a d-regular graph (N = 400, r = 0.4, d = 20). The particular experiment corresponds to a clustering problem involving N = 1000 images, each depicting a selected digit between 0 and 4 from the MNIST database (i.e., K = 5). We constructed a 12-nearest neighbor similarity graph and repeated the experiment 10 times, each time using a different image set, selected uniformly at random. For illustration purposes, we here additionally perform t = {2, 10} steps of a simple power iteration scheme, yielding an O(tM(K − 1)) overhead. (A hedged sketch of this clustering pipeline follows the table.)
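The benchmark graphs quoted above (the Stanford bunny point cloud, the Swiss roll, and the 12-nearest-neighbor MNIST graph) are built with standard k-nn constructions. The following is a minimal sketch of such a construction, not the authors' code: the Gaussian edge weights and the median-distance bandwidth heuristic are assumptions made for illustration, and the random feature matrix merely stands in for the actual image data.

```python
# Hedged sketch of a k-nn similarity graph; kernel and bandwidth are assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.spatial.distance import cdist

def knn_similarity_graph(X, k=12):
    """Symmetric k-nn graph with Gaussian edge weights (assumed kernel)."""
    N = X.shape[0]
    D = cdist(X, X)                            # pairwise Euclidean distances
    sigma = np.median(D)                       # assumed bandwidth heuristic
    rows, cols, vals = [], [], []
    for i in range(N):
        for j in np.argsort(D[i])[1:k + 1]:    # k nearest neighbours, skipping self
            rows.append(i)
            cols.append(j)
            vals.append(np.exp(-D[i, j] ** 2 / (2 * sigma ** 2)))
    W = sp.csr_matrix((vals, (rows, cols)), shape=(N, N))
    return W.maximum(W.T)                      # symmetrise the k-nn relation

# Example: random stand-ins for the N = 1000 MNIST digit images (K = 5 classes).
X = np.random.rand(1000, 784)
W = knn_similarity_graph(X, k=12)
L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # combinatorial Laplacian
```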
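The paper's Algorithm 1 (REC) draws edges at random with probability proportional to an edge potential and contracts them; with the heavy-edge potential the probability is proportional to the edge weight. Below is a hedged, simplified sketch of that idea, not a reproduction of the authors' pseudocode: the candidate-set bookkeeping, the dense adjacency input, and the stopping rule tied to the reduction ratio r are assumptions.

```python
# Hedged sketch of randomized edge contraction with a heavy-edge potential.
import numpy as np

def rec_coarsen(W, r=0.4, rng=None):
    """Contract ~r*N edges of a dense, symmetric weighted adjacency matrix W.

    Returns a list mapping every original vertex to its supervertex label.
    """
    rng = np.random.default_rng(rng)
    N = W.shape[0]
    target = int(r * N)                  # contractions needed for reduction ratio r
    parent = list(range(N))              # supervertex assignment (initially itself)
    blocked = set()                      # vertices already touched by a contraction

    # Candidate edges with heavy-edge potentials phi_ij = w_ij.
    edges = [(i, j, W[i, j]) for i in range(N) for j in range(i + 1, N) if W[i, j] > 0]

    contracted = 0
    while contracted < target and edges:
        phi = np.array([e[2] for e in edges], dtype=float)
        i, j, _ = edges[rng.choice(len(edges), p=phi / phi.sum())]
        parent[j] = i                    # contract edge (i, j): j merges into i
        blocked.update((i, j))
        contracted += 1
        # Remove every remaining candidate adjacent to the contracted edge.
        edges = [e for e in edges if e[0] not in blocked and e[1] not in blocked]

    return [parent[v] for v in range(N)]
```

Because contracted endpoints are blocked, each vertex is merged at most once, so the returned parent labels are already the supervertex roots.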
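The clustering experiment combines coarsening with spectral clustering: K eigenvectors are computed on the coarsened Laplacian, lifted back to the original graph, optionally refined with t power-iteration steps (the quoted O(tM(K − 1)) overhead), and then fed to k-means. The sketch below shows one plausible such pipeline; the lifting matrix, the Gershgorin-style shift, the QR re-orthonormalisation, and the k-means step are assumptions made for illustration, not the authors' exact procedure.

```python
# Hedged sketch: spectral clustering with coarsened eigenvectors and optional
# power-iteration refinement. All implementation details are assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh
from sklearn.cluster import KMeans

def coarsening_matrix(partition, N):
    """Rows = supervertices, columns = original vertices; rows orthonormal."""
    labels = {s: idx for idx, s in enumerate(sorted(set(partition)))}
    P = sp.lil_matrix((len(labels), N))
    for v, s in enumerate(partition):
        P[labels[s], v] = 1.0
    sizes = np.asarray(P.sum(axis=1)).ravel()
    return sp.diags(1.0 / np.sqrt(sizes)) @ P.tocsr()   # now P @ P.T = I

def coarse_spectral_clustering(L, partition, K=5, t=0):
    """Cluster with K eigenvectors of the coarsened Laplacian, lifted back."""
    N = L.shape[0]
    P = coarsening_matrix(partition, N)
    Lc = P @ L @ P.T                       # coarsened Laplacian (n x n)
    _, Uc = eigsh(Lc, k=K, which='SM')     # K smallest eigenpairs of Lc
    U = P.T @ Uc                           # lift eigenvectors to the original graph
    # Optional refinement: t power-iteration steps on (sI - L), whose leading
    # eigenvectors are the bottom eigenvectors of L (s is a Gershgorin bound).
    s = abs(L).sum(axis=1).max()
    for _ in range(t):
        U = s * U - L @ U
        U, _ = np.linalg.qr(U)             # re-orthonormalise the block
    return KMeans(n_clusters=K, n_init=10).fit_predict(U)
```

Chained with the two sketches above, a run resembling the quoted setup would be, for example, `labels = coarse_spectral_clustering(L, rec_coarsen(W.toarray(), r=0.4), K=5, t=2)`.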