Choice Set Optimization Under Discrete Choice Models of Group Decisions

Authors: Kiran Tomlinson, Austin Benson

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We design approximation algorithms for the hard problems and show that they work well on real-world choice data.
Researcher Affiliation | Academia | Department of Computer Science, Cornell University, Ithaca, New York, USA. Correspondence to: Kiran Tomlinson <kt@cs.cornell.edu>, Austin R. Benson <arb@cs.cornell.edu>.
Pseudocode | Yes | Algorithm 1: ε-additive approximation for AGREEMENT in the MNL model. Input: n individuals A, k items C, m alternatives C̄, utilities u_a(·) > 0 for each a ∈ A. (A minimal MNL sketch follows the table.)
Open Source Code | Yes | Our code and data are available at https://github.com/tomlinsonk/choice-set-opt.
Open Datasets | Yes | The SFWORK dataset (Koppelman & Bhat, 2006) comes from a survey of San Francisco residents on available (choice set) and selected (choice) transportation options to get to work. The ALLSTATE dataset (Kaggle, 2014) consists of insurance policies (items)... The YOOCHOOSE dataset (Ben-Shimon et al., 2015) contains online shopping data...
Dataset Splits | No | The paper describes splitting 'respondents into two segments' or 'data into homeowners and non-homeowners' to model different sub-populations, and Table 1 shows the 'percent of observations in sub-population splits', but it does not specify standard train/validation/test splits for model training and evaluation.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for its experiments.
Software Dependencies | No | The paper mentions using 'PyTorch' and 'Gurobi's implementation' but does not specify version numbers for these or other software dependencies.
Experiment Setup | Yes | For inferring maximum-likelihood models from data, we use PyTorch's Adam optimizer (Kingma & Ba, 2015; Paszke et al., 2019) with learning rate 0.05, weight decay 0.00025, batch size 128, and the amsgrad flag (Reddi et al., 2018). (See the optimizer configuration sketch below.)
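
To make the quoted Algorithm 1 setting concrete, here is a minimal Python sketch of MNL choice probabilities and a brute-force baseline for choosing which alternatives to add to a choice set. The pairwise total-variation disagreement measure, the function names, and the toy utilities are illustrative assumptions rather than the paper's exact objective; the paper's Algorithm 1 is an ε-additive approximation for AGREEMENT under MNL, whereas this baseline simply enumerates small subsets of alternatives.

```python
import itertools
import math

def mnl_probs(utilities, choice_set):
    """MNL choice probabilities: P(x | C) is proportional to exp(u(x))."""
    expu = {x: math.exp(utilities[x]) for x in choice_set}
    z = sum(expu.values())
    return {x: expu[x] / z for x in choice_set}

def disagreement(all_utilities, choice_set):
    """Hypothetical proxy: sum of pairwise total-variation distances
    between individuals' MNL choice distributions over choice_set."""
    dists = [mnl_probs(u, choice_set) for u in all_utilities]
    total = 0.0
    for p, q in itertools.combinations(dists, 2):
        total += 0.5 * sum(abs(p[x] - q[x]) for x in choice_set)
    return total

def best_alternatives_bruteforce(all_utilities, items, alternatives, k=2):
    """Exhaustively try adding up to k alternatives to the base choice set
    and keep the subset that minimizes the disagreement proxy.
    (Exponential in the number of alternatives; the paper's Algorithm 1
    instead gives an epsilon-additive approximation for AGREEMENT.)"""
    best, best_val = frozenset(), disagreement(all_utilities, items)
    for r in range(1, k + 1):
        for extra in itertools.combinations(alternatives, r):
            val = disagreement(all_utilities, items + list(extra))
            if val < best_val:
                best, best_val = frozenset(extra), val
    return best, best_val

# Toy usage: two individuals, base items {a, b}, alternatives {c, d}.
utilities = [
    {"a": 1.0, "b": 0.0, "c": 0.5, "d": -1.0},  # individual 1
    {"a": 0.0, "b": 1.0, "c": 0.5, "d": -1.0},  # individual 2
]
print(best_alternatives_bruteforce(utilities, ["a", "b"], ["c", "d"], k=2))
```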
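
The experiment-setup row quotes concrete optimizer hyperparameters; a minimal PyTorch sketch of that configuration follows. The model here is only a placeholder (the paper fits MNL and related choice models), and the commented data-loading line is an assumed illustration of how a batch size of 128 would typically be applied.

```python
import torch

# Placeholder stand-in for the paper's choice models; any nn.Module
# with learnable utility parameters would be optimized the same way.
model = torch.nn.Linear(10, 5)

# Hyperparameters as reported in the paper's experiment setup.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=0.05,
    weight_decay=0.00025,
    amsgrad=True,  # the "amsgrad flag" (Reddi et al., 2018)
)

# Batch size 128 would be handled by the data loader, e.g.:
# loader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True)
```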