The Sample Complexity of Best-$k$ Items Selection from Pairwise Comparisons

Authors: Wenbo Ren, Jia Liu, Ness Shroff

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we perform experiments on the synthetic dataset with equal noise-levels (i.e., p_{i,j} = 0.6 for all items i and j with i ≻ j) and public election datasets provided by PrefLib (Mattei and Walsh, 2013). In the supplementary material, we present the results of the synthetic dataset with unequal noise-levels and the numerical illustrations of the growth rates of the exact best-k items selection bounds. The codes and datasets can be found in our GitHub page.
Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, The Ohio State University, Columbus, Ohio, USA; (2) Department of Computer Science, Iowa State University, Ames, Iowa, USA; (3) Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio, USA.
Pseudocode | Yes | Algorithm 2: Epsilon-Quick-Select(S, k, ϵ, δ) (EQS)
Open Source Code | Yes | The codes and datasets can be found in our GitHub page.
Open Datasets | Yes | Public election datasets provided by PrefLib (Mattei and Walsh, 2013).
Dataset Splits | No | No explicit mention of train/validation/test dataset splits. The paper studies the sample complexity of selection algorithms rather than training models on data.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are mentioned for running the experiments.
Software Dependencies | No | No specific software versions or dependencies (e.g., library names with version numbers) are mentioned.
Experiment Setup | Yes | We set δ = 0.01, and examine how the number of comparisons conducted increases with n. In Figure 1(a), we set ϵ = 0.08, and in Figure 1(b), we set ϵ = 0.001.
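
To make the quoted setup concrete, the sketch below simulates the noisy pairwise-comparison model described in the Research Type and Experiment Setup rows: equal noise levels p_{i,j} = 0.6 whenever item i is truly preferred to item j, with a counter for the number of comparisons spent. This is a minimal illustration under those assumptions; the function names and budget accounting are our own and do not reproduce the authors' released code or their Epsilon-Quick-Select implementation.

```python
import random

def make_comparer(true_ranking, p=0.6, seed=0):
    """Build a noisy comparison oracle with equal noise levels.

    Illustrative assumption: whenever item i truly precedes item j in
    `true_ranking`, the oracle reports i as the winner with probability
    p (p_{i,j} = 0.6 in the quoted setup), and j otherwise.
    """
    rng = random.Random(seed)
    rank = {item: pos for pos, item in enumerate(true_ranking)}
    counter = {"comparisons": 0}

    def compare(i, j):
        counter["comparisons"] += 1
        better, worse = (i, j) if rank[i] < rank[j] else (j, i)
        return better if rng.random() < p else worse

    return compare, counter

# Example use: any selection routine (e.g., a PAC best-k selector run with
# delta = 0.01) can call `compare` and its cost shows up in the counter,
# which is how a "number of comparisons vs. n" curve would be measured.
items = list(range(100))
compare, counter = make_comparer(items, p=0.6)
winner = compare(3, 7)  # one noisy comparison between items 3 and 7
print(winner, counter["comparisons"])
```

Plugging such an oracle into a selection algorithm and sweeping n (with δ = 0.01 and ϵ set to 0.08 or 0.001 as in the quoted setup) is one straightforward way to regenerate comparison-count curves of the kind reported in Figure 1, assuming the synthetic equal-noise model above.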