PASTA: Pessimistic Assortment Optimization
Authors: Juncheng Dong, Weibin Mo, Zhengling Qi, Cong Shi, Ethan X. Fang, Vahid Tarokh
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We consider a class of assortment optimization problems in an offline data-driven setting. Numerical studies demonstrate the superiority of the proposed method over the existing baseline method. Experiments on the simulated datasets (so that θ is known) corroborate the efficacy of pessimistic assortment optimization. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Duke University, Durham, NC 27705, United States. 2Mitchell E. Daniels, Jr. School of Business, Purdue University, West Lafayette, IN 47907, United States. 3Department of Decision Sciences, George Washington University, Washington, DC 20052, United States. 4Herbert Business School, University of Miami, Coral Gables, FL 33146, United States. 5Department of Biostatistics and Bioinformatics, Duke University, Durham, NC 27705, United States. |
| Pseudocode | Yes | Algorithm 1 PASTA |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing source code for the methodology or a link to a code repository. |
| Open Datasets | No | We consider the assortment optimization scenarios described by N, K, d, n and p... we first generate the true preference vector θ as a uniformly random unit d-dimensional vector... Then, we generate an offline dataset D = {(S_i, A_i, R_i)}_{i=1}^n with n samples... The paper uses 'simulated datasets' and describes a 'Data Generation' process, but does not refer to a publicly available or open dataset with access information. (A hedged data-generation sketch follows the table.) |
| Dataset Splits | No | The paper describes generating synthetic datasets but does not specify explicit train/validation/test splits (e.g., percentages, sample counts) that would be needed to reproduce the experiments with external data. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU/CPU models, memory) used to run its experiments. It only refers to 'numerical studies'. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments (e.g., Python, PyTorch, or specific solvers). |
| Experiment Setup | Yes | For hyper-parameters, we set α_n = 2p^ML_L, where p^ML_L = p^L_n(θ_ML,n), and the maximum number of iterations T = 30. In all of our numerical studies, we set L = 2, r_β = 0.01, and c = 1/2, which performs well empirically. (A configuration sketch follows the table.) |
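
Below is a minimal sketch of the data-generation process quoted in the Open Datasets row, assuming a multinomial logit (MNL) choice model. The roles assigned to (S_i, A_i, R_i), the feature distribution, the behavior policy, and the revenue model are illustrative assumptions, not the authors' released code.

```python
import numpy as np

# Hypothetical sketch of the simulated data generation described in the
# Open Datasets row, assuming an MNL choice model. Variable names, the
# feature distribution, and the behavior policy are assumptions.

rng = np.random.default_rng(seed=0)
N, K, d, n = 20, 5, 10, 1000  # catalog size N, capacity K, feature dim d, samples n

# True preference vector theta: a uniformly random unit d-dimensional vector.
theta = rng.normal(size=d)
theta /= np.linalg.norm(theta)

item_features = rng.normal(size=(N, d))  # assumed i.i.d. standard normal features
revenue = rng.uniform(size=N)            # assumed per-item revenues

dataset = []
for _ in range(n):
    # Behavior policy (assumed): offer a uniformly random size-K assortment A_i.
    A = rng.choice(N, size=K, replace=False)
    # MNL choice probabilities over the offered items plus the no-purchase option.
    w = np.exp(item_features[A] @ theta)
    probs = np.append(w, 1.0) / (w.sum() + 1.0)  # last entry = no purchase
    S = rng.choice(K + 1, p=probs)               # S_i: observed customer choice
    R = 0.0 if S == K else revenue[A[S]]         # R_i: realized revenue
    dataset.append((S, A, R))
```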
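For completeness, the hyper-parameter values from the Experiment Setup row can be gathered into one configuration. `pasta_config` and its argument `p_L_ML` are hypothetical names; `p_L_ML` stands in for the plug-in quantity p^L_n(θ_ML,n), which a full implementation of Algorithm 1 would compute from the maximum-likelihood estimate.

```python
def pasta_config(p_L_ML: float) -> dict:
    """Hyper-parameters reported in the Experiment Setup row.

    `p_L_ML` is a placeholder for p^L_n(theta_ML,n), the plug-in value
    computed from the maximum-likelihood estimate; computing it requires
    the full PASTA implementation and is outside this sketch.
    """
    return {
        "alpha_n": 2.0 * p_L_ML,  # pessimism weight: alpha_n = 2 * p^ML_L
        "T": 30,                  # maximum number of iterations
        "L": 2,
        "r_beta": 0.01,
        "c": 0.5,                 # c = 1/2
    }
```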