Private Graph All-Pairwise-Shortest-Path Distance Release with Improved Error Rate

Authors: Chenglin Fan, Ping Li, Xiaoyun Li

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Numerical examples are also provided. Additionally, we also propose a DP algorithm with error rate O(k), which improves the error of general graphs, when the graph has small feedback vertex set number k = o(n^(1/2))." (Abstract) ... "In Figure 1, we present the empirical largest absolute error over all pairwise distances against the size of graph n. For the theoretical bounds, we start with the first empirical error (black) associated with n = 100, and increase at the rate of n (dash) and n^(1/2) log^2 n (dotted), respectively. We validate that the empirical error grows with rate slower than n^(1/2) log^2 n, as given by our theory, for all ϵ values." (Section 3.3 Numerical Example)
Researcher Affiliation | Industry | Chenglin Fan, Ping Li, Xiaoyun Li; Cognitive Computing Lab, Baidu Research, 10900 NE 8th St., Bellevue, WA 98004, USA; {chenglinfan2020, pingli98, lixiaoyun996}@gmail.com
Pseudocode | Yes | "Algorithm 1: Private all pairwise shortest paths distance release." ... "Algorithm 2: Differentially private all pairwise shortest path distances release for graph with small feedback vertex number."
Open Source Code | No | The paper states, "As will be shown in the paper, this allows us to simply publish the constructed synthetic graph, upon which standard APSP computation leads to the desired error level." (Section 1.3 More Related Work). However, it neither provides a link nor states that the source code for the method is open-source or otherwise available.
Open Datasets | No | The paper states: "We simulate a multi-stage graph, where each edge weight is generated from i.i.d. uniform distribution." (Section 3.3 Numerical Example). This indicates a synthetic dataset, but no access information (a link or a citation for public availability) is provided for it.
Dataset Splits | No | The paper does not provide dataset splits (e.g., train/validation/test percentages or counts) for reproduction; it uses a simulated graph for its numerical examples.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., programming languages, libraries, or solvers).
Experiment Setup | Yes | "We simulate a multi-stage graph, where each edge weight is generated from i.i.d. uniform distribution. The graph contains concatenated blocks, each with 1 start node, 1 end node, and 9 nodes in the middle connected to both the start node and the end node. The end node then becomes the start node of the next block." (Section 3.3 Numerical Example) ... "Left: each edge weight is Unif(2000, 3000). Right: edge weight from Unif(10^4, 10^5). Averaged over 200 repetitions." (Figure 1 caption) ... "ϵ′ = ϵ/2." (Algorithm 1) ... "ϵ′ = ϵ/3." (Algorithm 2)
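The multi-stage graph described in the Experiment Setup row can be reconstructed from the text alone. The sketch below is our reading of that description, not the authors' code (the paper releases none); the function name and the RNG choice are our own assumptions.

```python
import random

def multi_stage_graph(num_blocks, low=2000.0, high=3000.0, seed=0):
    """Build the simulated multi-stage graph from Section 3.3 (reconstructed
    from the paper's description). Each block has 1 start node, 9 middle
    nodes connected to both the start and the end node, and 1 end node;
    the end node of one block becomes the start node of the next. Edge
    weights are i.i.d. Unif(low, high).

    Returns (num_nodes, edges) with edges as (u, v, w) triples on nodes
    numbered 0..num_nodes-1.
    """
    rng = random.Random(seed)
    edges = []
    start = 0
    for _ in range(num_blocks):
        end = start + 10
        for m in range(start + 1, start + 10):  # the 9 middle nodes
            edges.append((start, m, rng.uniform(low, high)))
            edges.append((m, end, rng.uniform(low, high)))
        start = end  # this block's end node is the next block's start node
    return start + 1, edges
```

With this layout, B blocks give 10B + 1 nodes and 18B edges, so the graph size n in Figure 1 is controlled directly by the number of blocks.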
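Algorithm 1's pseudocode is not reproduced here, but the release pattern the paper describes (publish a noisy synthetic graph, then run standard APSP on it) matches the classic input-perturbation baseline. The sketch below implements that baseline under stated assumptions, not the authors' exact algorithm: edge-weight sensitivity is assumed to be 1, the function names are ours, and noisy weights are clamped at zero (post-processing, so privacy is preserved) to avoid negative cycles in the undirected noisy graph.

```python
import math
import random

def laplace(scale, rng):
    """Inverse-CDF sample from a Laplace(0, scale) distribution."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_apsp_release(n, edges, eps, seed=0):
    """Input-perturbation baseline for private APSP release: add
    Laplace(1/eps) noise to each edge weight once (assuming sensitivity 1),
    publish the noisy graph, and run standard Floyd-Warshall on it.

    edges: iterable of (u, v, w) for an undirected graph on nodes 0..n-1.
    Returns the n x n matrix of noisy all-pairs shortest-path distances.
    """
    rng = random.Random(seed)
    INF = float("inf")
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:
        # One noise draw per edge; clamp at 0 so no negative cycle appears.
        noisy = max(0.0, w + laplace(1.0 / eps, rng))
        d[u][v] = min(d[u][v], noisy)
        d[v][u] = min(d[v][u], noisy)
    # Standard (non-private) Floyd-Warshall on the published noisy graph;
    # this is pure post-processing and consumes no additional privacy budget.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

The point of this release style, as the quoted Section 1.3 remark notes, is that all privacy cost is paid when the noisy graph is constructed; any downstream shortest-path computation is free.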
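The Figure 1 comparison quoted in the Research Type row anchors two reference curves at the first empirical error (n = 100) and scales them at rates n and n^(1/2) log^2 n. A minimal sketch of that scaling, with a placeholder anchor value e0 = 1.0 standing in for the measured error:

```python
import math

def linear_bound(n, e0=1.0, n0=100):
    """Reference curve growing at rate n, anchored to error e0 at n = n0."""
    return e0 * (n / n0)

def sqrt_log_bound(n, e0=1.0, n0=100):
    """Reference curve growing at rate n^(1/2) * log^2 n, same anchor."""
    rate = lambda m: math.sqrt(m) * math.log(m) ** 2
    return e0 * rate(n) / rate(n0)

# Tabulate both curves for a few graph sizes, as in the paper's Figure 1.
for n in (100, 400, 1600, 6400):
    print(n, linear_bound(n), sqrt_log_bound(n))
```

For n well above the anchor point, the n^(1/2) log^2 n curve sits far below the linear one, which is the gap the paper's empirical errors are checked against.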