When to Reset Your Keys: Optimal Timing of Security Updates via Learning

Authors: Zizhan Zheng, Ness Shroff, Prasant Mohapatra

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we demonstrate the advantages of our learning algorithms through numerical study. The results are averaged over 100 independent trials and are given in Figure 2."
Researcher Affiliation | Academia | Zizhan Zheng, Department of Computer Science, Tulane University; Ness B. Shroff, Dept. of ECE and CSE, The Ohio State University; Prasant Mohapatra, Department of Computer Science, University of California, Davis
Pseudocode | Yes | "Algorithm 1: Improved UCB algorithm for time-associative bandits with side observations" (a generic UCB sketch follows the table)
Open Source Code | No | The paper does not provide a link to open-source code for the methodology, nor does it explicitly state that the code has been released.
Open Datasets | No | "We use the following synthetic dataset. We assume that the attack time a_t follows an i.i.d. Weibull distribution with CDF F(a) = 1 − exp(−(a/λ)^k) for a ≥ 0 and F(a) = 0 for a < 0." No access details are provided.
Dataset Splits | No | The paper does not provide training, validation, or test dataset splits.
Hardware Specification | No | The paper does not specify the hardware used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software dependencies with version numbers.
Experiment Setup | Yes | "In each trial, λ is chosen from the interval [1, 20] uniformly at random. We consider a 19-arm setting with x_i evenly distributed in [1, 10] with a step size of 0.5. We consider both the binary loss function and the linear loss function mentioned in the model section. In both cases, we fix the defense cost to c_d = 0.1." (a simulation sketch follows the table)
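
The Pseudocode row names Algorithm 1, an improved UCB algorithm for time-associative bandits with side observations. The paper's variant is not reproduced here; the following is a minimal sketch of a standard UCB1 loop, assuming bounded rewards in [0, 1], to illustrate the index-based arm selection that such algorithms build on. The function name ucb1 and the pull callback are illustrative, not from the paper.

```python
import math

def ucb1(n_arms, pull, horizon):
    """Minimal UCB1 loop: play each arm once, then repeatedly pick the arm with
    the largest empirical mean plus exploration bonus. The paper's Algorithm 1
    additionally exploits side observations and time-associative rewards, which
    are not modeled in this sketch."""
    counts = [0] * n_arms     # number of pulls per arm
    means = [0.0] * n_arms    # empirical mean reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            i = t - 1         # initialization: try every arm once
        else:
            i = max(range(n_arms),
                    key=lambda j: means[j] + math.sqrt(2.0 * math.log(t) / counts[j]))
        r = pull(i)           # observe the reward of the chosen arm (assumed in [0, 1])
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]
    return means, counts
```

A call such as ucb1(19, pull, horizon=10000) would match the 19-arm setting quoted above, with pull returning the (negative) cost observed for the chosen reset period.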
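
The Open Datasets and Experiment Setup rows together specify the synthetic data: Weibull attack times with λ drawn uniformly from [1, 20], 19 reset periods x_i spaced 0.5 apart in [1, 10], a defense cost c_d = 0.1, and averaging over 100 independent trials. The sketch below regenerates that setup via inverse-CDF sampling of the stated Weibull CDF; the shape parameter k and the binary-loss form 1{a < x} are not given in the excerpt and are marked as assumptions in the code.

```python
import math
import random

C_D = 0.1                                  # defense cost fixed in the paper
ARMS = [1 + 0.5 * i for i in range(19)]    # x_i evenly spaced in [1, 10], step 0.5
N_TRIALS = 100                             # results are averaged over 100 trials
K_SHAPE = 1.5                              # ASSUMED Weibull shape k (not given in the excerpt)

def sample_attack_time(lam, k=K_SHAPE):
    """Inverse-CDF sample from F(a) = 1 - exp(-(a/lam)^k), a >= 0."""
    u = random.random()
    return lam * (-math.log(1.0 - u)) ** (1.0 / k)

def run_trial(rounds=1000):
    """One trial: draw lambda uniformly from [1, 20], then report the average
    per-round cost of each candidate reset period x. The cost model below
    (defense cost plus a binary loss when the attack precedes the reset) is an
    illustrative assumption, not the paper's exact loss function."""
    lam = random.uniform(1, 20)
    avg_cost = []
    for x in ARMS:
        total = 0.0
        for _ in range(rounds):
            a = sample_attack_time(lam)
            total += C_D + (1.0 if a < x else 0.0)
        avg_cost.append(total / rounds)
    return avg_cost

# Average the per-arm costs over 100 independent trials, mirroring the quoted setup.
results = [run_trial() for _ in range(N_TRIALS)]
mean_cost = [sum(trial[i] for trial in results) / N_TRIALS for i in range(len(ARMS))]
```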