Online Learning for Active Cache Synchronization

Authors: Andrey Kolobov, Sébastien Bubeck, Julian Zimmert

ICML 2020

Reproducibility assessment: each variable below is listed with its result and the supporting LLM response.
Research Type: Experimental
LLM Response: "In Section 8, we compare the two algorithms empirically. The goal of our experiments was to evaluate (1) the relative performance of MIRRORSYNC and ASYNCMIRRORSYNC... Figures 1 and 2 compare the performance of MIRRORSYNC vs. ASYNCMIRRORSYNC and ASYNCMIRRORSYNC vs. ASYNCPSGDSYNC, respectively. The plots were obtained by running the respective pairs of algorithms on 150 problem instances generated as above, measuring the policy cost J after every update, and averaging the resulting curves across these 150 trials."

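As a rough sketch of this evaluation protocol, the snippet below averages per-update policy-cost curves over independently generated instances. The `run` callables and `make_instance` argument are hypothetical stand-ins for the paper's algorithms and instance generator, not the authors' code.

```python
import numpy as np

def average_cost_curves(algorithms, make_instance, n_trials=150, n_updates=100):
    """Average per-update policy-cost curves across random instances.

    `algorithms` maps a name to a callable that runs on an instance and
    returns the policy cost J after each of `n_updates` updates. Both the
    callables and `make_instance` are hypothetical stand-ins.
    """
    curves = {name: np.zeros(n_updates) for name in algorithms}
    for trial in range(n_trials):
        instance = make_instance(seed=trial)          # fresh random instance
        for name, run in algorithms.items():
            curves[name] += np.asarray(run(instance, n_updates))
    return {name: total / n_trials for name, total in curves.items()}

# Toy usage with dummy stand-ins for the two algorithms being compared:
avg = average_cost_curves(
    {"MirrorSync": lambda inst, n: np.linspace(1.0, 0.10, n),
     "AsyncMirrorSync": lambda inst, n: np.linspace(1.0, 0.15, n)},
    make_instance=lambda seed: None, n_trials=10, n_updates=50)
```
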
Researcher Affiliation: Industry
LLM Response: "Andrey Kolobov¹, Sébastien Bubeck¹, Julian Zimmert²; ¹Microsoft Research, Redmond; ²Google Research, Berlin."

Pseudocode: Yes
LLM Response: "Algorithm 1: MIRRORSYNC; Algorithm 2: ASYNCMIRRORSYNC"

Open Source Code: Yes
LLM Response: "The implementation, available at https://github.com/microsoft/Optimal-Freshness-Crawl-Scheduling, relies on scipy.optimize.minimize for convex constrained optimization in order to update the play rates r (lines 20, 42 of Alg. 1, line 33 of Alg. 2)."

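Since the report names scipy.optimize.minimize as the workhorse for the constrained play-rate updates, here is a minimal sketch of what such an update could look like, assuming a bandwidth budget B, per-arm bounds [r_min, r_max], and a gradient estimate. The Euclidean proximal objective is an illustrative stand-in, not the paper's exact mirror-descent map.

```python
import numpy as np
from scipy.optimize import minimize

def update_play_rates(r_prev, grad, eta, r_min, r_max, B):
    """Illustrative constrained play-rate update via scipy.optimize.minimize.

    Solves a proximal step: argmin_r  eta * <grad, r> + 0.5 * ||r - r_prev||^2
    subject to r_min <= r_k <= r_max and sum(r) <= B. The Euclidean
    regularizer is an assumption; the paper's algorithms use their own
    mirror map.
    """
    def objective(r):
        return eta * grad.dot(r) + 0.5 * np.sum((r - r_prev) ** 2)

    constraints = [{"type": "ineq", "fun": lambda r: B - np.sum(r)}]  # sum(r) <= B
    bounds = [(r_min, r_max)] * len(r_prev)
    res = minimize(objective, x0=r_prev, bounds=bounds,
                   constraints=constraints, method="SLSQP")
    return res.x

# Example: 5 arms, budget B = 2.0 (all values illustrative)
rng = np.random.default_rng(0)
r = update_play_rates(r_prev=np.full(5, 0.4), grad=rng.normal(size=5),
                      eta=0.1, r_min=0.01, r_max=1.0, B=2.0)
```
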
Open Datasets: No
LLM Response: "For all problem instances in the experiments, rmin, rmax, B, and K were as in Table 1 in Appendix B. The set {c_k(τ)}_{k=1}^{K} of cost-generating processes was constructed randomly for each instance. In all problem instances, each arm had a distribution over time-dependent cost functions in the form of polynomials c_k(τ) = a_k·τ^{p_k}, where p_k ∈ (0, 1) was chosen at instance creation time and a_k sampled at run time as described in Appendix B."

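A minimal sketch of such an instance generator, assuming uniform distributions for the exponents p_k and coefficients a_k; the paper's exact sampling scheme lives in its Appendix B and is not reproduced here.

```python
import numpy as np

def make_instance(K, rng):
    """Hypothetical generator for one synthetic problem instance.

    Each arm k gets a polynomial cost function c_k(tau) = a_k * tau**p_k
    with p_k in (0, 1) fixed at creation time and a_k resampled at run
    time. The uniform ranges below are illustrative assumptions.
    """
    p = rng.uniform(0.05, 0.95, size=K)       # exponents, fixed per instance

    def sample_costs(tau):
        a = rng.uniform(0.0, 1.0, size=K)     # coefficients, drawn at run time
        return a * tau ** p                   # c_k(tau) for every arm

    return sample_costs

rng = np.random.default_rng(42)
costs = make_instance(K=200, rng=rng)
print(costs(tau=3.0)[:5])   # costs of the first 5 arms at elapsed time 3.0
```
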
Dataset Splits: No
LLM Response: The paper does not explicitly specify train/validation/test dataset splits; it describes generating synthetic problem instances and evaluating algorithms over 'rounds' or 'simulated time'.

Hardware Specification: Yes
LLM Response: "The experiments were performed on a Windows 10 laptop with 32GB RAM and 8 Intel 2.3GHz i9-9980H CPU cores."

Software Dependencies: No
LLM Response: "The implementation, available at https://github.com/microsoft/Optimal-Freshness-Crawl-Scheduling, relies on scipy.optimize.minimize for convex constrained optimization in order to update the play rates r (lines 20, 42 of Alg. 1, line 33 of Alg. 2)." No specific version numbers for scipy or other software dependencies are provided.

Experiment Setup: Yes
LLM Response: "Running the algorithms required choosing values for the following parameters: learning rate η; length l_upd^round of intervals between ASYNCMIRRORSYNC's and ASYNCPSGDSYNC's play-rate updates; exploration parameter ε. For comparing relative convergence properties of MIRRORSYNC, ASYNCMIRRORSYNC, and ASYNCPSGDSYNC, we fixed ε = 0.05 for all of them."
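Gathering the reported settings in one place, the configuration sketch below marks which values the report actually states; everything labeled a placeholder is a hypothetical stand-in for the Table 1 / Appendix B values not quoted here.

```python
# Experiment configuration implied by the report. Only epsilon = 0.05 and
# n_trials = 150 are stated explicitly; the rest are placeholders for the
# settings given in Table 1 / Appendix B of the paper.
config = {
    "eta": 0.1,        # learning rate (placeholder)
    "l_upd": 1.0,      # interval between asynchronous play-rate updates (placeholder)
    "epsilon": 0.05,   # exploration parameter, fixed across all three algorithms
    "n_trials": 150,   # problem instances per comparison (from the report)
    "K": 200,          # number of arms (placeholder)
    "B": 2.0,          # bandwidth budget (placeholder)
    "r_min": 0.01,     # per-arm rate lower bound (placeholder)
    "r_max": 1.0,      # per-arm rate upper bound (placeholder)
}
```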