Anti-differentiating approximation algorithms: A case study with min-cuts, spectral, and flow

Authors: David Gleich, Michael Mahoney

ICML 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we present several empirical results that illustrate our theory from Section 3. We begin with a few pedagogical examples in order to state solutions of these problems that are correctly normalized. These precise values are sensitive to small differences and imprecisions in solving the equations. We state them here so that others can verify the correctness of their future implementations, although the codes underlying our computations are also available for download. We then illustrate how the 2-norm PageRank vector (Prob. (4)) and the 1-norm regularized vector (Prob. (6)) approximate the 1-norm min-cut problem (Prob. (2)) for a small but widely-studied social graph.
Researcher Affiliation | Academia | David F. Gleich (dgleich@purdue.edu), Computer Science, Purdue University, West Lafayette, IN 47906; Michael W. Mahoney (mmahoney@icsi.berkeley.edu), International Computer Science Institute and Dept. of Statistics, University of California at Berkeley, Berkeley, CA 94720
Pseudocode | Yes |
1. x^(1) = 0, r^(1) = (1 - β) e_i, k = 1
2. while any r_j > τ d_j   (d_j is the degree of node j)
3.   x^(k+1) = x^(k) + (r_j - τ d_j ρ) e_j
4.   r_i^(k+1) = τ d_j ρ if i = j;  r_i^(k) + β (r_j - τ d_j ρ)/d_j if i ~ j;  r_i^(k) otherwise
5.   k ← k + 1
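To make the quoted pseudocode concrete, here is a minimal Python sketch of the push procedure. It assumes an undirected networkx-style graph; the function name push_pagerank and the queue-based sweep over nodes with large residuals are our own illustrative choices, not the authors' implementation (their reference code is the MATLAB package linked under Open Source Code below).

def push_pagerank(G, seed, beta=2.0 / 3.0, tau=1e-2, rho=1.0):
    """Sketch of the push procedure above; G is an undirected networkx-style graph."""
    x = {v: 0.0 for v in G}            # step 1: x^(1) = 0
    r = {v: 0.0 for v in G}
    r[seed] = 1.0 - beta               # step 1: r^(1) = (1 - beta) e_i
    queue = [seed] if r[seed] > tau * G.degree(seed) else []
    while queue:                       # step 2: while any r_j > tau d_j
        j = queue.pop()
        dj = G.degree(j)
        if r[j] <= tau * dj:           # node may already have been pushed below threshold
            continue
        push = r[j] - tau * dj * rho
        x[j] += push                   # step 3: move mass from the residual to the solution
        r[j] = tau * dj * rho          # step 4, i = j
        for u in G.neighbors(j):
            r[u] += beta * push / dj   # step 4, i ~ j (neighbors of j)
            if r[u] > tau * G.degree(u):
                queue.append(u)
    return x, r

With ρ = 1 each push resets r_j exactly to its threshold value τ d_j; smaller ρ leaves less residual behind at node j on each step.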
Open Source Code | Yes | The codes underlying our computations are also available for download: http://www.cs.purdue.edu/homes/dgleich/codes/l1pagerank
Open Datasets | Yes | Consider, for instance, Newman’s netscience graph (Newman, 2006), which has 379 nodes and 914 undirected edges.
Dataset Splits | No | The paper does not provide specific details on training, validation, or test dataset splits. It focuses on analyzing graph properties rather than a typical machine learning training/evaluation pipeline with distinct data splits.
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | A summary of these results may be found in Table 1. To orient the reader about the normalizations (which can be tricky), we present the three PageRank vectors that satisfy the relationships x_pr = D z and vol(S) z = x(α, S), as predicted by the theory from Section 2 and Theorem 1. Note that z, x(α, S), and z_G round to the discrete solution produced by the exact cut if we simply threshold at about half the largest value. Thus, and not surprisingly, all these formulations reveal to a first approximation similar properties of the graph. Importantly, though, note also that the vector z_G has the same sparsity as the true cut-vector. This is a direct consequence of the implicit ℓ1 regularization that we characterize in Theorem 3. Understanding these details matters critically as we proceed to apply these methods to larger graphs, where there may be many small entries in the PageRank vector z. ... Here, we set α = 1/2, τ = 10^-2, ρ = 1, β = 2/3, and κ = 0.315.
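As a rough illustration of how these settings fit together, the following sketch (hypothetical: the file name netscience.gml, the choice of seed node, and the reuse of the push_pagerank sketch above are our assumptions, not the paper's script) loads the largest component of the netscience graph, runs the push procedure with the quoted parameters, and reads off a discrete set by thresholding at about half the largest value.

import networkx as nx

# Assumed local GML copy of Newman's netscience graph.
G = nx.read_gml("netscience.gml", label="id")
# The paper works with the 379-node, 914-edge largest component.
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

# Parameter values quoted from the experiment setup above.
alpha, tau, rho, beta, kappa = 0.5, 1e-2, 1.0, 2.0 / 3.0, 0.315

seed = next(iter(G.nodes))  # placeholder seed; the paper targets a specific seed set S
z_G, residual = push_pagerank(G, seed, beta=beta, tau=tau, rho=rho)

# "Threshold at about half the largest value" to recover a discrete cluster.
cutoff = 0.5 * max(z_G.values())
cluster = [v for v, val in z_G.items() if val >= cutoff]
print(len(cluster), "nodes above the threshold")

Here alpha and kappa are carried along only to record the quoted configuration; they do not appear in the push loop itself but in the other problem formulations discussed in the paper.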