Learning Mixed Multinomial Logits with Provable Guarantees

Authors: Yiqun Hu, David Simchi-Levi, Zhenzhen Yan

NeurIPS 2022

Reproducibility Variable | Result | LLM Response

Research Type: Experimental
  "Finally, we conduct simulation studies to evaluate the performance and demonstrate how to apply our algorithm to real-world applications."

Researcher Affiliation: Academia
  Yiqun Hu, MIT, huyiqun@mit.edu; David Simchi-Levi, MIT, dslevi@mit.edu; Zhenzhen Yan, Nanyang Technological University, yanzz@ntu.edu.sg

Pseudocode: Yes
  Algorithm 1: Stochastic Subregion Frank-Wolfe

Open Source Code: Yes
  "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Appendix B. The code for generating synthetic data can be found on our Github repo: https://github.com/yiqunhu/SSRFW."

Open Datasets: No
  "We apply our SSRFW algorithm to a dataset of customer choices from a large online retailer... The dataset consists of customer choices of mobile phone brands for 21 weeks between November 2021 and March 2022."

Dataset Splits: No
  "We split the data into 16 weeks for training and the last 5 weeks for testing."

Hardware Specification: No
  "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]"

Software Dependencies: No
  "The code for generating synthetic data can be found on our Github repo: https://github.com/yiqunhu/SSRFW."

Experiment Setup: Yes
  "The number of subsamples for Q-construction is set to L = 500, and n = 100 as the subsample size. The number of iterations for SSRFW is set to 200. We calibrate all other parameters to be consistent with our synthetic data experiments, i.e., M = 5, D = 2."
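To make the quoted experiment setup concrete, the sketch below wires up the stated hyperparameters (L = 500 subsamples of size n = 100, 200 SSRFW iterations, M = 5, D = 2) around a hypothetical data pool. It is not the authors' SSRFW implementation; the subsample-mean stand-in for the Q-construction step and the synthetic data pool are illustrative assumptions only.

```python
import numpy as np

# Hyperparameters quoted from the paper's experiment setup.
L = 500   # number of subsamples for Q-construction
n = 100   # subsample size
T = 200   # number of SSRFW iterations (would drive the main loop)
M = 5     # number of mixture components, as in the synthetic experiments
D = 2     # feature dimension

rng = np.random.default_rng(0)

# Hypothetical pool of N observations (a stand-in for real choice data).
N = 10_000
data = rng.standard_normal((N, D))

# Draw L subsamples of size n without replacement, as in Q-construction.
subsamples = [rng.choice(N, size=n, replace=False) for _ in range(L)]

# Each subsample yields one candidate point; the subsample mean here is
# an illustrative placeholder, not the paper's actual construction.
Q = np.stack([data[idx].mean(axis=0) for idx in subsamples])

print(Q.shape)  # (500, 2)
```

With these settings, the full algorithm would then run T = 200 Frank-Wolfe-style iterations over the candidate set Q; only the subsampling scaffold is shown here.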