Optimal Sequential Maximization: One Interview is Enough!

Authors: Moein Falahatgar, Alon Orlitsky, Venkatadheeraj Pichapati

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "4. Experiments: In this section, we compare the performance of various sequential maximization algorithms: SEQ-ELIMINATE (Falahatgar et al., 2017a), AGNOSTIC-SEQ, and OPT-AGNOSTIC-SEQ."
Researcher Affiliation | Collaboration | "1 Apple Inc. 2 University of California, San Diego."
Pseudocode | Yes | Algorithm 1 ASYMMETRIC-THRESHOLD (A-T), Algorithm 2 OPTIMAL-SEQUENTIAL (O-S), Algorithm 3 OPT-ANCHOR-UPDATE, Algorithm 4 OPT-AGNOSTIC-SEQ
Open Source Code | No | The paper does not provide any links to source code repositories or explicit statements indicating that code for its methodology is available.
Open Datasets | No | The paper generates data from synthetic models for its experiments (e.g., "all items are essentially equal, i.e., p_{i,j} = 1/2 for all i, j" and "p_{i,j} = 0.6 for all i < j"). It does not mention using, or provide access information for, any publicly available dataset.
Dataset Splits | No | The paper describes experiments on synthetic data models and does not provide train/validation/test dataset splits, percentages, or absolute sample counts required for reproduction.
Hardware Specification | No | The paper does not provide any details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or CPLEX versions).
Experiment Setup | Yes | "In all the experiments in this section, we try to find an 0.05-maximum with δ = 0.1. All results are averaged over 100 runs." and "We first consider the model where all items are essentially equal, i.e., p_{i,j} = 1/2 for all i, j." and "We now consider the model where p_{i,j} = 0.6 for all i < j."
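To make the quoted experiment setup concrete, here is a minimal Python sketch of a comparison oracle for the two synthetic models described above. The function name, the single-function interface, and the model labels are illustrative assumptions; the paper does not release code.

```python
import random

def compare(i, j, model="equal"):
    """Simulate one noisy pairwise comparison; returns True iff item i beats item j.

    The two synthetic models quoted from the paper's experiment section:
      "equal"   : p_{i,j} = 1/2 for all i, j   (all items are essentially equal)
      "ordered" : p_{i,j} = 0.6 for all i < j  (lower-indexed items are better)
    """
    if model == "equal":
        p = 0.5                    # every comparison is a fair coin flip
    elif model == "ordered":
        p = 0.6 if i < j else 0.4  # the better (lower-indexed) item wins w.p. 0.6
    else:
        raise ValueError(f"unknown model: {model!r}")
    return random.random() < p
```

Under the paper's setup, a maximization algorithm would be run against such an oracle with ε = 0.05 and δ = 0.1, and results averaged over 100 runs.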
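For orientation only, the sketch below shows a generic anchor-based single-pass maximizer driven by a noisy comparison oracle of the kind above. It is not the paper's OPT-AGNOSTIC-SEQ or ASYMMETRIC-THRESHOLD; the fixed Hoeffding-style duel budget and the union-bound split of δ are textbook simplifications assumed here for illustration.

```python
import math
import random

def duel(anchor, challenger, compare, eps, delta):
    # Fixed comparison budget m = ceil(ln(2/delta) / (2 * eps^2)): the standard
    # Hoeffding sample size for estimating a win probability to within eps with
    # confidence 1 - delta.  (Simplification; the paper's subroutines are adaptive.)
    m = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    wins = sum(compare(challenger, anchor) for _ in range(m))
    return challenger if 2 * wins > m else anchor

def sequential_max(items, compare, eps=0.05, delta=0.1):
    """Single left-to-right pass keeping a running anchor ('one interview' per item)."""
    anchor = items[0]
    # Naive union bound: split the allowed failure probability across all duels.
    duel_delta = delta / max(1, len(items) - 1)
    for challenger in items[1:]:
        anchor = duel(anchor, challenger, compare, eps, duel_delta)
    return anchor

def compare_ordered(i, j):
    # 'Ordered' model from the experiments: p_{i,j} = 0.6 for all i < j, so item 0 is best.
    return random.random() < (0.6 if i < j else 0.4)

if __name__ == "__main__":
    runs = [sequential_max(list(range(20)), compare_ordered) for _ in range(100)]
    print("fraction of runs returning item 0:", runs.count(0) / len(runs))
```

With ε = 0.05 and δ = 0.1 as in the paper, this naive pass should return item 0 in nearly every run of the ordered model, but the union-bound budget costs an extra logarithmic factor in the number of items; avoiding that overhead is the kind of improvement the optimal algorithms analyzed in the paper are after.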