Putting the “Learning” into Learning-Augmented Algorithms for Frequency Estimation

Authors: Elbert Du, Franklyn Wang, Michael Mitzenmacher

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We verify the theoretical results with synthetic experiments. In what follows, there are n = 10^6 keys, and the frequency of the kth key is n/k. We consider three different scenarios: No Screening, Perfect Screening, Imperfect Screening. ... We use the CAIDA dataset.
Researcher Affiliation | Academia | 1 Department of Mathematics, Harvard University; 2 Department of Computer Science, Harvard University. Correspondence to: Franklyn Wang <franklyn wang@college.harvard.edu>, Elbert Du <edu@college.harvard.edu>.
Pseudocode | Yes | We reproduce the algorithm of (Hsu et al., 2019) below in Algorithm 1.
Open Source Code | Yes | Our source code can be found at https://github.com/franklynwang/putting-the-learning-in-LAA.
Open Datasets | Yes | Following (Hsu et al., 2019), we use the CAIDA dataset.
Dataset Splits | Yes | We use the first 7 minutes of the link for training, the 8th minute for validation, and the 9th, 30th, and 60th minutes for testing.
Hardware Specification | Yes | We use 1 NVIDIA V100 for each run.
Software Dependencies | No | The paper mentions using "an LSTM" and "Adam" as the optimizer, but it does not specify software versions (e.g., PyTorch, Python, or CUDA versions).
Experiment Setup | Yes | Each training run takes around 12 to 18 hours, and trains for 200 epochs with batch size equal to 1024. For the optimizer, we use Adam with learning rate 10^-3.
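
To make the synthetic setup quoted under Research Type concrete (n = 10^6 keys, the kth key appearing n/k times, evaluated under No, Perfect, and Imperfect Screening), the following is a minimal sketch of how such data and heavy-hitter oracles could be generated. It is illustrative only; the noise model and cutoffs are assumptions, not the authors' code.

```python
import numpy as np

# Illustrative sketch (not the authors' code) of the synthetic setup:
# n = 10^6 keys, where the kth key has frequency n / k (a Zipfian profile).
n = 10**6
ranks = np.arange(1, n + 1)
frequencies = n / ranks                      # frequency of the kth key is n/k

# "Perfect screening": an oracle that flags exactly the true heavy hitters.
threshold = np.quantile(frequencies, 0.999)
perfect_oracle = frequencies >= threshold

# "Imperfect screening" (assumed noise model): oracle scores correlated with,
# but not identical to, the true log-frequencies.
rng = np.random.default_rng(0)
noisy_scores = np.log(frequencies) + rng.normal(scale=1.0, size=n)
imperfect_oracle = noisy_scores >= np.quantile(noisy_scores, 0.999)
```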
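
The Pseudocode row refers to Algorithm 1 of Hsu et al. (2019), the learned Count-Min sketch: keys the learned oracle predicts to be heavy hitters get their own exact counters, and all remaining keys share a standard Count-Min sketch. Below is a minimal sketch of that idea; the hash function and parameters are assumptions, not the paper's implementation.

```python
import numpy as np

class LearnedCountMin:
    """Minimal sketch of the learned Count-Min idea from Hsu et al. (2019):
    keys the learned oracle predicts to be heavy hitters get exact, unique
    counters; all other keys share a standard Count-Min sketch. The hash
    function and parameters here are illustrative, not the paper's."""

    def __init__(self, width: int, depth: int, is_heavy):
        self.width = width
        self.depth = depth
        self.is_heavy = is_heavy                      # learned oracle: key -> bool
        self.table = np.zeros((depth, width), dtype=np.int64)
        self.exact = {}                               # unique buckets for predicted heavy hitters

    def _hash(self, key, row: int) -> int:
        # Simple per-row hashing, for illustration only.
        return hash((row, key)) % self.width

    def update(self, key, count: int = 1) -> None:
        if self.is_heavy(key):
            self.exact[key] = self.exact.get(key, 0) + count
        else:
            for row in range(self.depth):
                self.table[row, self._hash(key, row)] += count

    def estimate(self, key) -> int:
        if key in self.exact:
            return self.exact[key]
        return min(self.table[row, self._hash(key, row)] for row in range(self.depth))

# Example usage with an assumed oracle:
# sketch = LearnedCountMin(width=2048, depth=4, is_heavy=lambda k: k in predicted_heavy)
```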
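
The Dataset Splits row describes a per-minute split of the CAIDA trace (minutes 1-7 for training, minute 8 for validation, minutes 9, 30, and 60 for testing). A small configuration sketch follows; the file-naming scheme is hypothetical, not taken from the paper or its repository.

```python
# Assumed illustration of the per-minute split; file names are hypothetical.
train_minutes = list(range(1, 8))   # minutes 1-7: training
valid_minutes = [8]                 # minute 8: validation
test_minutes = [9, 30, 60]          # minutes 9, 30, 60: testing

def trace_path(minute: int) -> str:
    # Hypothetical naming convention for the per-minute trace files.
    return f"caida/trace_minute_{minute:02d}.pcap"

train_files = [trace_path(m) for m in train_minutes]
valid_files = [trace_path(m) for m in valid_minutes]
test_files = [trace_path(m) for m in test_minutes]
```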
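
The Experiment Setup row reports 200 epochs, batch size 1024, and Adam with learning rate 10^-3, and the Software Dependencies row notes that only "an LSTM" and "Adam" are named, without framework versions. The PyTorch-style sketch below mirrors those reported hyperparameters; the model dimensions, loss, and data loader are assumptions, since the section does not specify them.

```python
import torch

# Illustrative configuration only; model sizes and loss are assumptions.
# Reported hyperparameters: 200 epochs, batch size 1024, Adam with lr = 1e-3.
EPOCHS = 200
BATCH_SIZE = 1024

model = torch.nn.LSTM(input_size=8, hidden_size=64, batch_first=True)  # assumed sizes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Skeleton of a training loop over a hypothetical DataLoader of packet features:
# for epoch in range(EPOCHS):
#     for features, targets in train_loader:    # train_loader is assumed
#         optimizer.zero_grad()
#         outputs, _ = model(features)
#         loss = loss_fn(outputs, targets)      # loss_fn is assumed
#         loss.backward()
#         optimizer.step()
```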