Online Selection Problems against Constrained Adversary
Authors: Zhihao Jiang, Pinyan Lu, Zhihao Gavin Tang, Yuhao Zhang
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical Experiments. We solve the optimization on a standard PC with the KNITRO solver. We compare the competitive ratios of the N-step threshold algorithm at different accuracies to those of the algorithm in the adversary setting (refer to Alg. 1). See the following table for the competitive ratios for various values of N and ε. |
| Researcher Affiliation | Academia | 1Department of Management Science and Engineering, Stanford University, Stanford, California, USA 2School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai, China 3John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, Shanghai, China. |
| Pseudocode | Yes | Algorithm 1: Online Single Item Selection with Prediction |
| Open Source Code | No | The paper does not contain any statements about making source code publicly available or links to code repositories. |
| Open Datasets | No | The paper performs numerical experiments to evaluate theoretical competitive ratios rather than empirical studies on specific datasets. Therefore, the concept of a publicly available 'train' dataset is not applicable to this work. |
| Dataset Splits | No | The paper conducts numerical experiments on theoretical competitive ratios and does not involve dataset splits (training, validation, test) for model evaluation. |
| Hardware Specification | No | The paper states, 'We solve the optimization on a standard PC,' but provides no specific hardware details such as CPU/GPU models, memory, or other specifications. |
| Software Dependencies | No | The paper mentions using the 'KNITRO solver' but does not specify a version number or any other software dependencies with version information. |
| Experiment Setup | No | The paper describes parameters for its numerical optimization (e.g., N-step functions for h, accuracy values for epsilon) but does not provide specific experimental setup details such as hyperparameters (learning rates, batch sizes), model initialization, or training schedules typically found in machine learning experiments. |