Efficient Algorithms for Learning Revenue-Maximizing Two-Part Tariffs

Authors: Maria-Florina Balcan, Siddharth Prasad, Tuomas Sandholm

IJCAI 2020

Reproducibility Variable — Result — LLM Response

Research Type — Experimental
  "We ran Gurobi (the fastest general-purpose mixed integer program solver) to find the revenue-maximizing single TPT for a single buyer (after formulating this problem as an integer program), and Algorithm 1 beat it dramatically. For example, averaged over 10 runs on randomly generated instances with K = 5 units and N = 600 samples, our algorithm returned the revenue-maximizing TPT in under 23 minutes while Gurobi took over 3.5 hours."

Researcher Affiliation — Collaboration
  Maria-Florina Balcan (1), Siddharth Prasad (1), Tuomas Sandholm (1, 2, 3, 4); (1) School of Computer Science, Carnegie Mellon University; (2) Optimized Markets, Inc.; (3) Strategy Robot, Inc.; (4) Strategic Machine, Inc.

Pseudocode — Yes
  Algorithm 1: Single TPT for a Single Buyer

Open Source Code — No
  The paper does not contain any explicit statements about releasing source code, nor links to code repositories.

Open Datasets — No
  The paper mentions using "randomly generated instances" for experiments but does not provide concrete access information (such as a link, DOI, or formal citation) for a publicly available or open dataset.

Dataset Splits — No
  The paper refers to a set of samples S and to drawing fresh samples, but does not specify explicit training, validation, or test splits (e.g., percentages, sample counts, or predefined splits with citations).

Hardware Specification — No
  The paper mentions running Gurobi but does not specify any hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for the experiments.

Software Dependencies — No
  The paper names Gurobi as the solver but does not specify its version number or any other software dependencies with version information.

Experiment Setup — No
  The paper gives parameters for the randomly generated instances (K = 5 units, N = 600 samples) but does not provide further experimental setup details such as hyperparameters, optimizer settings, or other system-level configuration for the algorithms themselves.
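To make the setting above concrete: a two-part tariff (TPT) charges a buyer an up-front fee p1 plus a per-unit price p2, and the buyer purchases the quantity that maximizes her utility (buying nothing if no quantity gives non-negative utility). The sketch below is NOT the paper's Algorithm 1 or its integer-program formulation; it is a hypothetical brute-force baseline that evaluates sample-average revenue over a candidate grid of tariffs, assuming each sample is a list of (weakly decreasing) marginal values for successive units.

```python
from itertools import product


def buyer_demand(values, p1, p2):
    """Units a utility-maximizing buyer purchases under tariff (p1, p2).

    values: marginal values v_1, v_2, ... for successive units.
    Ties are broken toward buying fewer units; the buyer opts out
    (q = 0, paying nothing) if no quantity yields utility > 0.
    """
    best_q, best_u = 0, 0.0
    total = 0.0
    for q, v in enumerate(values, start=1):
        total += v
        utility = total - p1 - p2 * q
        if utility > best_u:
            best_q, best_u = q, utility
    return best_q


def revenue(samples, p1, p2):
    """Sample-average revenue of tariff (p1, p2) over buyer valuations."""
    total = 0.0
    for values in samples:
        q = buyer_demand(values, p1, p2)
        if q > 0:  # participating buyers pay the fee plus per-unit charges
            total += p1 + p2 * q
    return total / len(samples)


def best_tariff_on_grid(samples, grid):
    """Exhaustive search over candidate (p1, p2) pairs from a price grid."""
    return max(product(grid, repeat=2),
               key=lambda tariff: revenue(samples, *tariff))
```

For example, a buyer with marginal values [10, 4] facing the tariff (p1, p2) = (3, 2) buys 2 units (utility 14 - 3 - 4 = 7) and pays 7. Unlike this grid search, the paper's Algorithm 1 finds the exactly revenue-maximizing tariff by exploiting the piecewise structure of the revenue function rather than discretizing prices.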