The Primal-Dual method for Learning Augmented Algorithms

Authors: Etienne Bamas, Andreas Maggiori, Ola Svensson

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present experimental results that confirm the theoretical analysis of Algorithm 6 for the TCP acknowledgement problem. The code is publicly available at https://github.com/etienne4/PDLA. We experiment on various types of distributions for packet arrival inputs.
Researcher Affiliation | Academia | Etienne Bamas, EPFL, Lausanne, Switzerland, etienne.bamas@epfl.ch; Andreas Maggiori, EPFL, Lausanne, Switzerland, andreas.maggiori@epfl.ch; Ola Svensson, EPFL, Lausanne, Switzerland, ola.svensson@epfl.ch
Pseudocode | Yes | Algorithm 1: PRIMAL-DUAL METHOD FOR ONLINE WEIGHTED SET COVER [1]. (...) Algorithm 2: PDLA FOR ONLINE WEIGHTED SET COVER. (...) (A hedged sketch of the classical update underlying Algorithm 1 is given after the table.)
Open Source Code | Yes | The code is publicly available at https://github.com/etienne4/PDLA.
Open Datasets | No | In all our instances, we set the subdivision parameter d to 100, which means that every second is split into 100 time units. Then we define an array of length 1000 where the i-th entry defines how many requests arrive at the i-th time step. Each entry in the array is drawn independently from the others from a distribution D. In the case of a Poisson distribution, we set D = P(1) (the Poisson distribution of mean 1). For the Pareto distribution, we choose D to be the Lomax distribution (which is a special case of the Pareto distribution) with shape parameter set to 2 ([29]). (An input-generation sketch is given after the table.)
Dataset Splits | No | The paper describes input distributions and a method for generating noisy predictions but does not specify traditional training, validation, or test dataset splits.
Hardware Specification | No | The paper does not provide specific details about the hardware used for its experiments.
Software Dependencies | No | The paper mentions that code is publicly available but does not list specific software dependencies with version numbers.
Experiment Setup | Yes | In all our instances, we set the subdivision parameter d to 100, which means that every second is split into 100 time units. Then we define an array of length 1000 where the i-th entry defines how many requests arrive at the i-th time step. Each entry in the array is drawn independently from the others from a distribution D. (...) We introduce a replacement rate p ∈ [0, 1]. Then we go through the instance generated according to some distribution D and for each entry at index 1 ≤ i ≤ 1000, with probability p we set this entry to 0 (i.e. we delete this entry) and with probability p we add to this entry a random variable Y ∼ D. (...) We then test our algorithm with 4 different values of the robustness parameter λ ∈ {1, 0.8, 0.6, 0.4}.
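
For intuition on the update rule quoted as Algorithm 1, the following is a minimal Python sketch of the classical multiplicative primal-dual update for online fractional set cover cited as [1]. It is not taken from the paper's pseudocode or the released PDLA code; the function and variable names are ours.

```python
# Sketch of the classical primal-dual update for online fractional set cover
# (the scheme cited as [1]). Assumed names; not the authors' implementation.

def cover_element(element_sets, costs, x):
    """Called when a new element arrives.

    element_sets : indices of the sets that contain the arriving element
    costs        : dict mapping set index -> cost c_S
    x            : dict mapping set index -> current fractional value x_S

    Raises the x_S of the relevant sets until sum_S x_S >= 1 and returns the
    number of augmentation steps (the increase of the dual variable y_e).
    """
    dual_increase = 0
    while sum(x[S] for S in element_sets) < 1:
        for S in element_sets:
            # Multiplicative + additive increase, scaled by the set's cost.
            x[S] = x[S] * (1 + 1 / costs[S]) + 1 / (len(element_sets) * costs[S])
        dual_increase += 1
    return dual_increase
```

PDLA (Algorithm 2) modifies this update so that sets suggested by the prediction are increased more aggressively, with the robustness parameter λ governing the trade-off between following the advice and retaining the worst-case guarantee; that variant is elided in the excerpt above and is not reproduced here.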
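The arrival sequences used in the TCP acknowledgement experiments (Open Datasets and Experiment Setup rows) can be generated along the following lines. Only d = 100, the array length 1000, Poisson of mean 1, and Lomax with shape parameter 2 are stated in the paper; the function name, the use of NumPy, and the rounding of real-valued Lomax draws to integer request counts are assumptions made here.

```python
# Hypothetical generator for the packet-arrival instances described above.
import numpy as np

def generate_arrivals(distribution="poisson", length=1000, seed=0):
    """Return an array whose i-th entry is the number of requests arriving at
    the i-th time step (each second being split into d = 100 time units)."""
    rng = np.random.default_rng(seed)
    if distribution == "poisson":
        return rng.poisson(lam=1.0, size=length)            # D = P(1)
    if distribution == "pareto":
        # NumPy's pareto() samples the Lomax (Pareto II) distribution;
        # rounding to integer counts is an assumption, not from the paper.
        return np.round(rng.pareto(2.0, size=length)).astype(int)
    raise ValueError(f"unknown distribution: {distribution}")
```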
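The noisy predictions from the Experiment Setup row could be produced as sketched below. Whether the deletion and addition events are drawn independently is a reading of the excerpt rather than a certainty, and the helper name is hypothetical.

```python
# Hypothetical noising of a true arrival sequence into a prediction.
import numpy as np

def add_noise(arrivals, p, sampler, seed=0):
    """With probability p an entry is set to 0 (deleted); with probability p a
    fresh draw Y ~ D (from `sampler`) is added to it. p = 0 gives a perfect
    prediction, larger p gives noisier advice."""
    rng = np.random.default_rng(seed)
    noisy = list(arrivals)
    for i in range(len(noisy)):
        if rng.random() < p:
            noisy[i] = 0                      # delete this entry
        if rng.random() < p:
            noisy[i] += sampler(rng)          # add Y ~ D
    return noisy

# Example with the Poisson case, D = P(1):
# prediction = add_noise(generate_arrivals("poisson"), p=0.3,
#                        sampler=lambda rng: int(rng.poisson(1.0)))
```

The algorithm is then run with the four robustness values λ ∈ {1, 0.8, 0.6, 0.4} on these noisy predictions.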