Preference Modeling with Context-Dependent Salient Features

Authors: Amanda Bower, Laura Balzano

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We also provide empirical results that support our theoretical bounds and illustrate how our model explains systematic intransitivity. Finally we demonstrate strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set and comparison data about the compactness of legislative districts in the US."
Researcher Affiliation | Academia | "1 Department of Mathematics, University of Michigan, Ann Arbor, MI; 2 Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI. Correspondence to: Amanda Bower <amandarg@umich.edu>."
Pseudocode | No | The paper describes mathematical formulations and derivations, but it does not include a distinct pseudocode block or an algorithm box.
Open Source Code | No | The paper states, "See Sections 14.1, 14.2, and 14.8 of the Supplement for additional details about the algorithm implementation, data, preprocessing, hyperparameter selection, and training and validation error for both synthetic and real data experiments." This points to implementation details but does not state that source code is released, nor does it provide a link.
Open Datasets | Yes | "Finally we demonstrate strong performance of maximum likelihood estimation of our model on both synthetic data and two real data sets: the UT Zappos50K data set (Yu and Grauman, 2014; 2017) and comparison data about the compactness of legislative districts in the US (Kaufman et al., 2017)."
Dataset Splits | Yes | "The k-wise ranking data sets are used for validation and testing. ... See Table 2 for the average pairwise comparison accuracy over ten train (70%), validation (15%), and test splits (15%) of the data."
Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper describes the mathematical models and experimental setups but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | "For the following experiments, we use the top-t selection function for the salient feature preference model, where t is treated as a hyperparameter and tuned on a validation set. We append an ℓ2 penalty to L_m for the salient feature preference model and the FBTL model, that is, for regularization parameter µ, we solve min_{w ∈ R^d} L_m(w) + µ‖w‖₂². ... the hyperparameters for the salient feature preference model are t for the top-t selection function and µ, the hyperparameter for FBTL is µ, the hyperparameter for Ranking SVM is the coefficient corresponding to the norm of the learned hyperplane, and the hyperparameters for RankNet are the number of nodes in the single hidden layer and the coefficient for the ℓ2 regularization of the weights."
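The quoted setup amounts to minimizing an ℓ2-regularized negative log-likelihood in which each pairwise comparison is scored only on the t coordinates where the two items differ most (the top-t selection function). Below is a minimal sketch of that objective, not the authors' implementation; the helper names (`top_t_mask`, `objective`) and the inputs `pairs`, `wins`, and feature matrix `U` are illustrative assumptions.

```python
import numpy as np

def top_t_mask(ui, uj, t):
    """Top-t selection: keep the t coordinates where |ui - uj| is largest."""
    diff = np.abs(ui - uj)
    idx = np.argsort(diff)[-t:]      # indices of the t largest differences
    mask = np.zeros_like(ui)
    mask[idx] = 1.0
    return mask

def objective(w, pairs, U, wins, t, mu):
    """Regularized negative log-likelihood  L_m(w) + mu * ||w||_2^2.

    pairs : list of (i, j) item-index pairs that were compared
    U     : (n_items, d) array of item feature vectors
    wins  : list with 1 if item i beat item j, else 0
    """
    nll = 0.0
    for (i, j), y in zip(pairs, wins):
        mask = top_t_mask(U[i], U[j], t)
        score = w @ (mask * (U[i] - U[j]))   # BTL-style score on salient coords
        p = 1.0 / (1.0 + np.exp(-score))     # model's P(i beats j)
        nll -= y * np.log(p) + (1.0 - y) * np.log(1.0 - p)
    return nll / len(pairs) + mu * np.dot(w, w)
```

With t equal to the full feature dimension the mask keeps every coordinate and the objective reduces to the ℓ2-regularized FBTL baseline the paper compares against; in practice w would be fit with any gradient-based optimizer, with t and µ tuned on the validation split as described above.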