Online Revenue Maximization for Server Pricing

Authors: Shant Boodaghians, Federico Fusco, Stefano Leonardi, Yishay Mansour, Ruta Mehta

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper considers online revenue maximization for a unit-capacity server when jobs are non-preemptive, in the Bayesian setting: at each time step one job arrives, with parameters drawn from an underlying distribution. We design an efficiently computable truthful posted-price mechanism which maximizes revenue in expectation and in retrospect, up to additive error. The prices are posted prior to learning the agent's type, and the computed pricing scheme is deterministic. We also show that the pricing mechanism is robust to learning the job distribution from samples, where polynomially many samples suffice to obtain near-optimal prices. To give these results, we first study the underlying optimization problem ignoring the strategic considerations of the jobs. In Section 2 we model the problem of finding a revenue-maximizing pricing strategy as a Markov Decision Process (MDP). In Section 3 we show that the revenue-maximizing pricing strategy can be efficiently computed via backwards induction given the distribution. On the positive side, in Section 3.3 we prove that the optimal pricing strategy is monotone in length under a distributional assumption, which we show is satisfied when the jobs' valuation follows a log-concave distribution, parametrized by length. Theorem 1. The online server pricing problem admits an optimal monotone pricing strategy when the variables L, V, and D satisfy Assumption 1. Also, when V is finitely supported, an exact optimum can be computed in time O(TK³).
Researcher Affiliation | Academia | (1) University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA; (2) Sapienza University of Rome, 00185 Rome, Italy; (3) Tel Aviv University, P.O. Box 39040, Tel Aviv 6997801, Israel
Pseudocode | No | The paper refers to algorithms such as MODIFIEDBIA and the backwards induction algorithm (BIA), but does not provide pseudocode or a clearly labeled algorithm block for them (an illustrative backward-induction sketch is given after the table).
Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology.
Open Datasets | No | The paper discusses learning an underlying distribution Q from samples conceptually. It does not refer to a specific named, public, or open dataset with access information (link, DOI, citation).
Dataset Splits | No | The paper is theoretical and discusses learning distributions from samples, but it does not specify any training, validation, or test dataset splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for computations or experiments.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers.
Experiment Setup | No | The paper describes a theoretical model (MDP) and conditions for optimal policies, but it does not provide concrete experimental setup details such as hyperparameters, optimizer settings, or training configurations.
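
Since neither BIA nor MODIFIEDBIA is spelled out in the paper, the following is only a minimal Python sketch of what a backward-induction computation of posted prices could look like in the finitely supported case. It assumes a simplified model in which each arriving job has a (length, value) pair drawn from a finite joint distribution Q, the server posts one price per length before seeing the type, and the delay variable D mentioned in the paper is ignored; the name optimal_prices and all identifiers are illustrative, not the authors' implementation.

def optimal_prices(T, Q):
    """Backward-induction sketch for a simplified server-pricing DP.

    T : horizon (number of time steps)
    Q : dict mapping (length, value) -> probability (finite joint distribution)

    Returns (expected revenue from step 0, per-step price menu).
    """
    lengths = sorted({l for (l, _) in Q})
    values = sorted({v for (_, v) in Q})

    # marginal of length and conditional distribution of value given length
    p_len = {l: sum(q for (l2, _), q in Q.items() if l2 == l) for l in lengths}
    cond = {l: {v: Q.get((l, v), 0.0) / p_len[l] for v in values} for l in lengths}

    # W[t] = optimal expected future revenue when the server is free at step t;
    # padded with zeros past the horizon so W[t + l] is always defined.
    W = [0.0] * (T + max(lengths) + 1)
    prices = [dict() for _ in range(T)]

    for t in range(T - 1, -1, -1):
        total = 0.0
        for l in lengths:
            cont_accept = W[t + l]    # server busy until step t + l
            cont_reject = W[t + 1]    # job declines; a fresh job arrives at t + 1
            best_rev, best_price = cont_reject, None   # None = price above the support
            for p in values:          # WLOG candidate prices lie in the value support
                accept = sum(q for v, q in cond[l].items() if v >= p)
                rev = accept * (p + cont_accept) + (1 - accept) * cont_reject
                if rev > best_rev:
                    best_rev, best_price = rev, p
            prices[t][l] = best_price
            total += p_len[l] * best_rev
        W[t] = total

    return W[0], prices

The inner loops run over T steps, the length support, and the value support twice, which is consistent in spirit with the O(TK³) bound quoted above for finitely supported V. The short usage snippet below also illustrates the paper's sample-based setting by plugging in an empirical estimate of Q built from hypothetical samples; the sample values are made up for illustration only.

from collections import Counter

samples = [(1, 3.0), (2, 5.0), (1, 3.0), (2, 4.0)]   # hypothetical (length, value) draws
Q_hat = {ty: c / len(samples) for ty, c in Counter(samples).items()}
revenue, price_menu = optimal_prices(T=10, Q=Q_hat)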