Secretary Ranking with Minimal Inversions

Authors: Sepehr Assadi, Eric Balkanski, Renato Paes Leme

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our main result is a matching upper and lower bound for the secretary ranking problem. We present an algorithm that ranks n elements with only O(n^{3/2}) inversions in expectation, and show that any algorithm necessarily suffers Ω(n^{3/2}) inversions when there are n available positions. In terms of techniques, the analysis of our algorithm draws connections to linear probing in the hashing literature, while our lower bound relies on a general anti-concentration bound for a generic balls-and-bins sampling process.
Researcher Affiliation | Collaboration | Sepehr Assadi (Rutgers University, sepehr.assadi@rutgers.edu); Eric Balkanski (Harvard University, ebalkans@gmail.com); Renato Paes Leme (Google Research, renatoppl@google.com)
Pseudocode | Yes |
ALGORITHM 1: Dense Ranking
  Input: a set R of n positions, denoted here by [n], and at most n online arrivals.
  for each time step t ∈ [n] and element a_t:
      Define r_t := |{a_{t'} : a_{t'} < a_t and t' < t}|.
      Sample x_t uniformly in the real interval [r_t · (n/t), (r_t + 1) · (n/t)] and choose erk(a_t) = x_t.
      Set the learned rank of a_t as π(a_t) = argmin_{i ∈ R} |i − erk(a_t)| and remove i from R.
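The pseudocode above can be sketched as the following Python implementation. This is a minimal illustration, not the authors' code (the paper releases none); the function and variable names (`dense_ranking`, `inversions`, `free`, `seen`) are our own, and `arrivals` is assumed to be a sequence of n distinct comparable elements revealed one at a time.

```python
import random

def dense_ranking(arrivals):
    """Sketch of Algorithm 1 (Dense Ranking): assign each online arrival
    a position in [1, n], estimating its rank from the elements seen so far."""
    n = len(arrivals)
    free = set(range(1, n + 1))   # R: positions still available
    seen = []                     # elements observed so far
    rank = {}
    for t, a_t in enumerate(arrivals, start=1):
        # r_t := number of earlier arrivals smaller than a_t
        r_t = sum(1 for a in seen if a < a_t)
        # estimated rank erk(a_t), drawn uniformly from [r_t * n/t, (r_t + 1) * n/t]
        x_t = random.uniform(r_t * n / t, (r_t + 1) * n / t)
        # learned rank: the free position closest to the estimate; remove it from R
        i = min(free, key=lambda p: abs(p - x_t))
        rank[a_t] = i
        free.remove(i)
        seen.append(a_t)
    return rank

def inversions(rank, arrivals):
    """Count pairs whose assigned positions disagree with their true order."""
    items = list(arrivals)
    return sum(1 for i in range(len(items)) for j in range(i + 1, len(items))
               if (items[i] < items[j]) != (rank[items[i]] < rank[items[j]]))
```

On a uniformly random arrival order, the paper's analysis says the expected number of inversions produced by this scheme is O(n^{3/2}), far below the Θ(n^2) of a random assignment.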
Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology.
Open Datasets | No | The paper describes a theoretical problem and algorithms; it does not use or refer to any dataset for training or evaluation.
Dataset Splits | No | The paper is theoretical and does not involve data splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not describe any experimental hardware.
Software Dependencies | No | The paper is theoretical and does not list any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not include experimental-setup details such as hyperparameters or system-level training settings.