Computing (1+epsilon)-Approximate Degeneracy in Sublinear Time

Authors: Valerie King, Alex Thomo, Quinton Yong

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on massive real-world web graphs show that our algorithm performs significantly faster than previous methods for computing degeneracy. We compare our solution to the (1 + ϵ)-approximate algorithm of [Farach-Colton and Tsai, 2016] and the SemiDeg+ algorithm from [Li et al., 2022] on massive real-world web graphs and show that our algorithm is faster than both on all datasets.
Researcher Affiliation | Academia | Valerie King, Alex Thomo, Quinton Yong; University of Victoria, Victoria, Canada; {val, thomo, quintonyong}@uvic.ca
Pseudocode | Yes | Algorithm 1 Approximate Degeneracy(G, ϵ, c) (an exact-degeneracy baseline is sketched below the table for reference)
Open Source Code | Yes | An implementation of our method is available at https://github.com/QuintonYong/NESD
Open Datasets | Yes | We use the datasets twitter-2010, sk-2005, gsh-2015, uk-2014, and clueweb12 from the Laboratory of Web Algorithmics [Boldi and Vigna, 2004; Boldi et al., 2011; Boldi et al., 2014], and the characteristics of each graph are illustrated in Table 1.
Dataset Splits | No | The paper does not provide specific training/validation/test dataset splits. The experiments involve running a graph algorithm on entire graphs, not machine learning models with traditional data splits.
Hardware Specification | Yes | The experiments were run on Amazon EC2 instances with the r5.24xlarge instance size (96 vCPU, 768 GB memory) running Linux.
Software Dependencies | Yes | The algorithms used in the experiments were implemented in Java 11.
Experiment Setup | Yes | We experiment with four different values of ϵ, namely 0.5, 0.25, 0.1, and 0.05, to illustrate the varying approximation accuracies of our algorithm. The value of c did not have a major impact on the results since the probability of correctness is high with n being large in all of the datasets. So, we use c = 0.5 for all experiments. (A hypothetical driver mirroring this setup is sketched after the table.)
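
For context on the quantity being approximated: the degeneracy of a graph is the largest d such that some subgraph has minimum degree d, and it is classically computed exactly by repeatedly peeling a minimum-degree vertex. The sketch below is that exact baseline in Java (the implementation language reported above), not the paper's sublinear-time NESD algorithm; the class and method names are illustrative assumptions. The peeling pass touches every edge, which is the linear cost the paper's sampling-based (1 + ϵ)-approximation is designed to avoid.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Baseline only: exact degeneracy via the classic peeling procedure
// (repeatedly remove a minimum-degree vertex). This is NOT the paper's
// sublinear-time approximate algorithm; it reads every edge.
public class ExactDegeneracy {

    // adj: adjacency lists of an undirected simple graph on vertices 0..n-1.
    public static int degeneracy(List<List<Integer>> adj) {
        int n = adj.size();
        int[] degree = new int[n];
        int maxDeg = 0;
        for (int v = 0; v < n; v++) {
            degree[v] = adj.get(v).size();
            maxDeg = Math.max(maxDeg, degree[v]);
        }
        // buckets.get(d) holds vertices whose current degree is d (updated lazily).
        List<Deque<Integer>> buckets = new ArrayList<>();
        for (int d = 0; d <= maxDeg; d++) buckets.add(new ArrayDeque<>());
        for (int v = 0; v < n; v++) buckets.get(degree[v]).add(v);

        boolean[] removed = new boolean[n];
        int result = 0;
        int processed = 0;
        int d = 0;
        while (processed < n) {
            if (buckets.get(d).isEmpty()) { d++; continue; }
            int v = buckets.get(d).poll();
            if (removed[v] || degree[v] != d) continue;   // stale bucket entry
            removed[v] = true;
            processed++;
            result = Math.max(result, d);                 // degeneracy = max peeling degree
            for (int u : adj.get(v)) {
                if (!removed[u]) {
                    degree[u]--;
                    buckets.get(degree[u]).add(u);        // re-file neighbour lazily
                    if (degree[u] < d) d = degree[u];     // rewind to the new minimum
                }
            }
        }
        return result;
    }
}
```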
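A minimal driver mirroring the reported setup (ϵ ∈ {0.5, 0.25, 0.1, 0.05}, c fixed at 0.5) might look as follows. ApproximateDegeneracy.run is an assumed placeholder for a call into the authors' method, not their actual API, and the timing scaffold is purely illustrative.

```java
// Hypothetical experiment driver; ApproximateDegeneracy.run(...) is an assumed
// placeholder, not the actual interface of the NESD implementation.
public class ExperimentDriver {
    public static void main(String[] args) {
        double[] epsilons = {0.5, 0.25, 0.1, 0.05}; // approximation accuracies tested
        double c = 0.5; // fixed, since correctness probability is already high for large n
        for (double eps : epsilons) {
            long start = System.nanoTime();
            // int approxDegeneracy = ApproximateDegeneracy.run(graph, eps, c); // assumed call
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("eps=%.2f  c=%.1f  time=%d ms%n", eps, c, elapsedMs);
        }
    }
}
```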