Preference Elicitation and Interview Minimization in Stable Matchings

Authors: Joanna Drummond, Craig Boutilier

AAAI 2014

Reproducibility Variable Result LLM Response
Research Type | Experimental | Empirical results show that RtI compares favorably to a recent pure interview-minimization algorithm, and that the number of interviews it requires is generally independent of the size of the market. Experimental results demonstrate the effectiveness of our scheme. We evaluate RtI on a variety of randomly generated matching problems, using several different probabilistic models as well as preferences derived from real-world ratings data. All results are reported over 20 random matching instances.
Researcher Affiliation | Academia | Joanna Drummond and Craig Boutilier, Department of Computer Science, University of Toronto, {jdrummond,cebly}@cs.toronto.edu
Pseudocode | Yes | Algorithm 1: Constructing Segments (|Gk|, w, k) ... Algorithm 2: Calculating PMR for OPPI (|Gk|, w, k) ... Algorithm 3: Refine-then-Interview Elicitation Scheme
Open Source Code | No | The paper does not include any explicit statement about releasing source code, or a link to a code repository, for the methodology described.
Open Datasets | Yes | Using markets of size n = 124, 252, and a window size of w = 4, we first compare the two using partitioned preferences of the form needed by LGS: women's (employers') preferences are drawn from a Mallows φ-model (Mallows 1957; Marden 1995)... Finally, we apply RtI to the MovieLens preference model, with n = 300, where agent preferences are determined using an affinity score based on how similarly they rank movies (see (Drummond and Boutilier 2013) for details). A footnote also points to http://www.grouplens.org/node/73, the 100K data set.
Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types and speeds, memory amounts, or other machine specifications) for running its experiments.
Software Dependencies | No | The paper mentions models and algorithms such as DFA, LGS, PPGS, and Luce-Shepard, but does not name specific software libraries with version numbers.
Experiment Setup | Yes | We set γ = 0.5, τ = 5, and normalize reported cognitive cost by e(γn) (as in DB). We first test RtI on Mallows distributions, varying the degree of preference correlation (or dispersion φ).
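For context on the matching setting the paper studies: its algorithms refine preferences before running a deferred-acceptance-style matching step. The sketch below is not the authors' RtI scheme, only a minimal man-proposing Gale-Shapley deferred-acceptance routine, assuming complete strict preference lists on both sides (all names and the list-of-lists encoding are our illustrative choices):

```python
def deferred_acceptance(men_prefs, women_prefs):
    """Man-proposing Gale-Shapley deferred acceptance.

    men_prefs[m] is man m's ranking of women, best first;
    women_prefs[w] is woman w's ranking of men, best first.
    Returns a dict mapping each woman to her matched man; the
    result is a stable matching, man-optimal among stable matchings.
    """
    n = len(men_prefs)
    # rank[w][m] = position of man m in woman w's list (lower = preferred).
    rank = [{m: i for i, m in enumerate(prefs)} for prefs in women_prefs]
    next_proposal = [0] * n      # index of each man's next proposal target
    engaged_to = {}              # woman -> currently engaged man
    free_men = list(range(n))
    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                    # w accepts her first proposal
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free_men.append(engaged_to[w])       # w trades up; old partner is free
            engaged_to[w] = m
        else:
            free_men.append(m)                   # w rejects m; he stays free
    return engaged_to
```

For example, `deferred_acceptance([[0, 1, 2], [1, 0, 2], [0, 1, 2]], [[1, 0, 2], [0, 1, 2], [0, 1, 2]])` returns `{0: 0, 1: 1, 2: 2}`: man 0 displaces man 2 at woman 0, and man 2 ends up with woman 2.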
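The Open Datasets and Experiment Setup rows both refer to preferences drawn from a Mallows φ-model. As a sketch of how such instances can be generated (not the authors' exact generator), the standard repeated-insertion method samples a ranking by inserting item i at position j with probability proportional to φ^(i−j); φ near 0 concentrates mass on the reference ranking, while φ = 1 is uniform over rankings:

```python
import random

def sample_mallows(n, phi, rng=random):
    """Sample one ranking of items 0..n-1 from a Mallows model
    with identity reference ranking and dispersion phi in (0, 1],
    via repeated insertion.
    """
    ranking = []
    for i in range(n):
        # Insertion position j in 0..i gets weight phi**(i - j), so
        # placing item i at the end (its reference position) has weight 1.
        weights = [phi ** (i - j) for j in range(i + 1)]
        r = rng.random() * sum(weights)
        acc = 0.0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                ranking.insert(j, i)
                break
    return ranking
```

With φ close to 0 the sampler essentially returns `list(range(n))`; the paper's experiments vary φ to control how correlated agents' preferences are.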