Competitive Caching with Machine Learned Advice

Authors: Thodoris Lykouris, Sergei Vassilvitskii

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We complement our results with an empirical evaluation of our algorithm on real world datasets, and show that it performs well empirically even using simple off-the-shelf predictions.
Researcher Affiliation | Collaboration | Cornell University, Ithaca, NY, USA; Google Research, New York, NY, USA. Correspondence to: Thodoris Lykouris <teddlyk@cs.cornell.edu>, Sergei Vassilvitskii <sergeiv@google.com>.
Pseudocode | Yes | Algorithm 1 (Predictive Marker) with oracle-based and random tie-breaking based on clean chains; see the sketch below this table.
Open Source Code | No | The paper does not contain an explicit statement that the authors are releasing their source code for the methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | BK is data extracted from Brightkite, a now defunct social network... This dataset is publicly available at (Cho et al., 2011; Bri). Citi is data extracted from Citi Bike... The dataset is publicly available at (Cit).
Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits. It mentions using real-world datasets but does not detail how these datasets were partitioned for evaluation or training purposes.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper does not mention any specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or solvers).
Experiment Setup | No | The paper mentions setting the cache size k for experiments (e.g., "We set k = 10", "k = 100"). However, it does not provide comprehensive experimental setup details such as hyperparameters, optimizer settings, training configurations, or system-level settings typically found in machine learning experiments.
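The Pseudocode row above refers to the paper's Algorithm 1 (Predictive Marker). As a rough illustration only, and not the authors' exact algorithm, the sketch below implements a simplified prediction-guided marking policy: pages are marked as they are requested, and on a miss an unmarked page is evicted, preferring the one whose predicted next request lies furthest in the future. The paper's clean-chain bookkeeping and randomized fallback are reduced here to a simple random tie-break, and the predictor interface `predict_next_use` is a hypothetical stand-in for the paper's off-the-shelf predictions.

```python
import random

def predictive_marker_sketch(requests, k, predict_next_use):
    """Toy prediction-guided marking policy (NOT the paper's exact Algorithm 1).

    requests         : iterable of page ids, e.g. a request trace
    k                : cache size (the paper's experiments mention k = 10 and k = 100)
    predict_next_use : hypothetical callable (page, time) -> predicted time of the
                       page's next request, standing in for the learned predictor
    Returns the number of cache misses on the trace.
    """
    cache, marked = set(), set()
    misses = 0
    for t, page in enumerate(requests):
        if page not in cache:
            misses += 1
            if len(cache) >= k:
                # Marking scheme: once every cached page is marked, a new phase
                # begins and all marks are cleared.
                if not (cache - marked):
                    marked.clear()
                unmarked = list(cache - marked)
                # Evict the unmarked page predicted to be requested furthest in
                # the future; ties are broken uniformly at random (a crude
                # stand-in for the paper's clean-chain-based randomization).
                preds = {p: predict_next_use(p, t) for p in unmarked}
                furthest = max(preds.values())
                victim = random.choice([p for p in unmarked if preds[p] == furthest])
                cache.discard(victim)
                marked.discard(victim)
            cache.add(page)
        marked.add(page)
    return misses
```

A call such as `predictive_marker_sketch(trace, k=10, predict_next_use=oracle)` mirrors the k = 10 setting noted in the Experiment Setup row; `trace` and `oracle` are placeholders for this sketch, not artifacts released with the paper.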