Inferring Networks From Random Walk-Based Node Similarities
Authors: Jeremy Hoskins, Cameron Musco, Christopher Musco, Babis Tsourakakis
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of these methods on a number of social networks obtained from Facebook. We study theoretically and empirically what can be learned about a graph given a subset of potentially noisy effective resistance estimates. We next present an experimental study of how well our methods can learn a graph given a set of (noisy) effective resistance measurements. |
| Researcher Affiliation | Collaboration | Jeremy G. Hoskins, Department of Mathematics, Yale University, New Haven, CT (jeremy.hoskins@yale.edu); Cameron Musco, Microsoft Research, Cambridge, MA (camusco@microsoft.com); Christopher Musco, Department of Computer Science, Princeton University, Princeton, NJ (cmusco@cs.princeton.edu); Charalampos E. Tsourakakis, Department of Computer Science, Boston University & Harvard University, Boston, MA (ctsourak@bu.edu) |
| Pseudocode | No | The paper describes algorithms but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/cnmusco/graph-similarity-learning. |
| Open Datasets | Yes | We also consider Facebook ego networks obtained from the Stanford Network Analysis Project (SNAP) collection [33, 34]. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for training, validation, or testing data partitions. |
| Hardware Specification | Yes | All experiments were run on a laptop with a 2.6 GHz Intel Core i7 processor and 16 GB of main memory. |
| Software Dependencies | No | The paper mentions "MOSEK convex optimization software, accessed through the CVX interface", but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | For Problem 2 we implemented gradient descent based on the closed-form gradient calculation in Proposition 1. Line search was used to optimize step size at each iteration. For larger problems, block coordinate descent was used as described in Section 3.2, with the coordinate set chosen uniformly at random in each iteration. We set the block size |B| = 5000. |
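
The experiment setup quoted above (gradient descent on a closed-form gradient, with line search to choose the step size) can be illustrated with a short sketch. The snippet below is not the authors' implementation (their MATLAB code is in the linked repository); it is a minimal sketch assuming a squared-error objective over measured effective resistances, the standard derivative of effective resistance with respect to an edge weight, a simple backtracking line search, and projection onto non-negative weights. All function names (`laplacian`, `resistances`, `fit_weights`) are illustrative, not taken from the paper or its code.

```python
# Hedged sketch: gradient descent with backtracking line search for fitting
# non-negative edge weights to (noisy) effective resistance measurements.
# Assumed objective: sum over measured pairs (i, j) of (r_ij(w) - r_meas)^2.
# Assumed gradient: d r_ij / d w_e = -((e_i - e_j)^T L^+ (e_u - e_v))^2,
# the standard derivative of effective resistance w.r.t. an edge weight.

import numpy as np


def laplacian(n, edges, w):
    """Weighted graph Laplacian for edge list `edges` with weights `w`."""
    L = np.zeros((n, n))
    for (u, v), wi in zip(edges, w):
        L[u, u] += wi
        L[v, v] += wi
        L[u, v] -= wi
        L[v, u] -= wi
    return L


def resistances(n, edges, w, pairs):
    """Effective resistances r_ij = (e_i - e_j)^T L^+ (e_i - e_j)."""
    Lp = np.linalg.pinv(laplacian(n, edges, w))
    return np.array([Lp[i, i] + Lp[j, j] - 2 * Lp[i, j] for i, j in pairs])


def objective_and_grad(n, edges, w, pairs, r_meas):
    """Squared error over measured pairs and its gradient in the edge weights."""
    Lp = np.linalg.pinv(laplacian(n, edges, w))
    res = np.array([Lp[i, i] + Lp[j, j] - 2 * Lp[i, j] for i, j in pairs])
    err = res - r_meas
    grad = np.zeros_like(w)
    for e, (u, v) in enumerate(edges):
        for k, (i, j) in enumerate(pairs):
            # cross = (e_i - e_j)^T L^+ (e_u - e_v)
            cross = (Lp[i, u] - Lp[i, v]) - (Lp[j, u] - Lp[j, v])
            grad[e] += 2 * err[k] * (-(cross ** 2))
    return np.sum(err ** 2), grad


def fit_weights(n, edges, pairs, r_meas, iters=100, w0=None):
    """Projected gradient descent with backtracking line search."""
    w = np.ones(len(edges)) if w0 is None else w0.copy()
    for _ in range(iters):
        f, g = objective_and_grad(n, edges, w, pairs, r_meas)
        step = 1.0
        while step > 1e-10:                          # backtracking line search
            w_new = np.maximum(w - step * g, 1e-8)   # project onto w >= 0
            f_new, _ = objective_and_grad(n, edges, w_new, pairs, r_meas)
            if f_new < f:
                break
            step *= 0.5
        w = w_new
    return w


if __name__ == "__main__":
    # Tiny synthetic check: fit the weights of a small 4-node graph to
    # resistances computed from a known weighting.
    n = 4
    edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
    true_w = np.array([1.0, 2.0, 0.5, 1.5])
    pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    r_meas = resistances(n, edges, true_w, pairs)
    print(fit_weights(n, edges, pairs, r_meas, iters=200))
```

For the block coordinate descent variant mentioned in the setup, the same update would be applied only to a randomly chosen subset of coordinates (the paper reports a block size of |B| = 5000) in each iteration, leaving the remaining edge weights fixed.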