Online Algorithms for Rent-Or-Buy with Expert Advice
Authors: Sreenivas Gollapudi, Debmalya Panigrahi
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test the efficacy of our algorithms via simulations. We set the buying cost b = 1. (The actual value of b is unimportant because we can scale all values by b.) We choose the actual outcome x to be a value uniformly drawn from [0, 2b]. We vary the number of experts from 1 to 8 and set their associated predictions to x + ϵ where ϵ is drawn from a normal distribution of mean 0 and standard deviation σ. To verify consistency and robustness of our algorithms, we vary σ from 0 to 2. Finally, for the algorithm in Fig. 4, we consider values of 0.1, 0.5, and 0.9 for the meta parameter λ. We label the algorithm defined in Figure 1 consistent; its extension to handle non-zero prediction errors (see Section 3) as robust; and the robust and consistent algorithm in Section 4 as hybrid. Figure 3 illustrates the relative performance of our algorithms. We make three observations. |
| Researcher Affiliation | Collaboration | 1Google Research 2Department of Computer Science, Duke University. |
| Pseudocode | Yes | Figure 1. The algorithm for k experts with zero error; Figure 2. The algorithm for k experts with non-zero error; Figure 4. The hybrid algorithm for k experts |
| Open Source Code | No | No explicit statement or link providing access to the source code for the described methodology. |
| Open Datasets | No | We test the efficacy of our algorithms via simulations. We set the buying cost b = 1. ... We choose the actual outcome x to be a value uniformly drawn from [0, 2b]. We vary the number of experts from 1 to 8 and set their associated predictions to x + ϵ where ϵ is drawn from a normal distribution of mean 0 and standard deviation σ. |
| Dataset Splits | No | The paper describes simulations where data is generated for each trial, but does not specify explicit train/validation/test splits as typically used with fixed datasets. |
| Hardware Specification | No | No specific hardware details (GPU/CPU models, memory, or specific computer specifications) are mentioned for running the experiments. |
| Software Dependencies | No | No specific software dependencies with version numbers are mentioned. |
| Experiment Setup | Yes | We set the buying cost b = 1. We choose the actual outcome x to be a value uniformly drawn from [0, 2b]. We vary the number of experts from 1 to 8 and set their associated predictions to x + ϵ where ϵ is drawn from a normal distribution of mean 0 and standard deviation σ. To verify consistency and robustness of our algorithms, we vary σ from 0 to 2. Finally, for the algorithm in Fig. 4, we consider values of 0.1, 0.5, and 0.9 for the meta parameter λ. |
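The simulation setup quoted above (b = 1, x ~ Uniform[0, 2b], k expert predictions x + ϵ with ϵ ~ Normal(0, σ)) is simple enough to sketch directly. The snippet below is a minimal reconstruction of the instance generation only, not the paper's algorithms; the function name `generate_trial` and the choice of NumPy are assumptions, as the paper releases no code.

```python
import numpy as np

def generate_trial(k, sigma, b=1.0, rng=None):
    """Generate one simulated rent-or-buy instance following the paper's
    reported setup: buying cost b = 1, true outcome x drawn uniformly from
    [0, 2b], and k expert predictions x + eps with eps ~ Normal(0, sigma).
    (Function name and interface are illustrative, not from the paper.)"""
    rng = rng if rng is not None else np.random.default_rng()
    x = rng.uniform(0.0, 2.0 * b)
    predictions = x + rng.normal(0.0, sigma, size=k)
    return x, predictions

# Sweep mirroring the reported grid: k from 1 to 8, sigma from 0 to 2.
rng = np.random.default_rng(0)
for k in range(1, 9):
    for sigma in np.linspace(0.0, 2.0, 5):
        x, preds = generate_trial(k, sigma, rng=rng)
```

With σ = 0 every expert predicts x exactly, which is the zero-error regime of the Figure 1 algorithm; increasing σ toward 2 stresses robustness as the quoted setup describes.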