Fully Dynamic Consistent Facility Location

Authors: Vincent Cohen-Addad, Niklas Oskar D. Hjuler, Nikos Parotsidis, David Saulpic, Chris Schwiegelshohn

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We complement our analysis with experiments showing that the cost of the solution maintained by our algorithm at any time t is very close to the cost of a solution obtained by quickly recomputing a solution from scratch at time t while having a much better running time.
Researcher Affiliation | Academia | Vincent Cohen-Addad, CNRS & Sorbonne Université, vcohen@di.ens.fr; Niklas Hjuler, University of Copenhagen, hjuler@di.ku.dk; Nikos Parotsidis, University of Copenhagen, nipa@di.ku.dk; David Saulpic, École normale supérieure & Sorbonne Université, david.saulpic@lip6.fr; Chris Schwiegelshohn, Sapienza University of Rome, schwiegelshohn@diag.uniroma1.it
Pseudocode | Yes | Input: an integer L, a set of points X. Output: a set of centers R, an assignment φ of points to centers, and t_l, the id of the last center opened.
1: Let R ← ∅ and x_1, ..., x_|X| be a random order on the points of X
2: for all i ∈ {1, ..., |X|} do
3:   if |R| < 4k(1 + log n)(2^(2p+2) + 1) then
4:     add x_i to R with probability d(x_i, R)^p · k(1 + log n) / L
5:     if |R| = 4k(1 + log n)(2^(2p+2) + 1) then
6:       t_l ← i
7:     end if
8:   end if
9:   φ(x_i) ← arg min_{c ∈ R} d(x_i, c)^p
10: end for
(a) Meyerson-Capped(L, X)
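The Meyerson-Capped pseudocode quoted above can be sketched in Python (the paper's experiments are in Python). This is a sketch under stated assumptions, not the authors' implementation: `dist`, `k`, `n`, and `p` are hypothetical parameter names for the underlying metric and problem parameters, and the opening probability is clamped to 1, as in Meyerson's online facility location algorithm.

```python
import math
import random

def meyerson_capped(L, X, k, n, p, dist):
    """Sketch of the Meyerson-Capped(L, X) routine.

    Points are visited in random order; each point opens a new center
    with probability proportional to d(x, R)^p, and opening stops once
    |R| reaches the cap 4k(1 + log n)(2^(2p+2) + 1).

    `dist`, `k`, `n`, `p` are assumed parameter names (not from the paper's
    notation beyond the pseudocode itself).
    """
    cap = 4 * k * (1 + math.log(n)) * (2 ** (2 * p + 2) + 1)
    R, phi, t_l = [], {}, None
    order = list(X)
    random.shuffle(order)                      # random order on the points of X
    for i, x in enumerate(order, start=1):
        # d(x, R): distance to the current center set (infinite if R is empty,
        # so the first point always opens a center)
        d = min((dist(x, c) for c in R), default=float("inf"))
        if len(R) < cap:
            prob = min(1.0, (d ** p) * k * (1 + math.log(n)) / L)
            if random.random() < prob:
                R.append(x)
                if len(R) >= cap:
                    t_l = i                    # id of the last center opened
        # assign x to its nearest center under the p-th power of the distance
        phi[x] = min(R, key=lambda c: dist(x, c) ** p)
    return R, phi, t_l
```

A usage example: with Euclidean points in the plane, `meyerson_capped(L=5.0, X=pts, k=2, n=len(pts), p=2, dist=euclidean)` returns the center set, the assignment map, and `None` for `t_l` when the cap is never reached.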
Open Source Code | No | The paper states "All our codes are written in Python." but does not provide a link to a code repository or an explicit statement that the code for the described methodology is publicly available.
Open Datasets | Yes | The Twitter data set [23], considered by [24], consists of 4.5 million geotagged tweets in 3 dimensions (longitude, latitude, timestamp). We restricted our experiments to the first 200K tweets. The COVERTYPE data set [5], considered by [30], from the UCI repository has 581K points and 54 dimensions. We restricted our experiments to the first 100K points and 10 dimensions (the ones we believed to be appropriate for a Euclidean metric). The USCensus1990 data set [32] from the UCI repository has 69 dimensions and 2.5 million points. We restricted our experiments to the first 30K points.
Dataset Splits | No | The paper mentions restricting the number of points used from each dataset but does not provide details on specific training, validation, or test splits. There is no mention of cross-validation either.
Hardware Specification | Yes | The experiments were executed on a Windows 10 machine with processor: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz, 3701 MHz, 6 cores, 12 logical processors, and 16 GB RAM.
Software Dependencies | No | The paper states "All our codes are written in Python." but does not provide specific version numbers for Python or any other software libraries or dependencies used in the experiments.
Experiment Setup | Yes | For COVERTYPE and USCensus1990, we used a window size of 5000 points and a facility cost of 0.5; for Twitter, the window size was 10000 and the facility cost 0.004.