Learning Resource Allocation and Pricing for Cloud Profit Maximization

Authors: Bingqian Du, Chuan Wu, Zhiyi Huang

AAAI 2019, pp. 7570-7577 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Evaluation based on real-world datasets shows that our DRL approach outperforms basic DRL algorithms and state-of-the-art white-box online cloud resource allocation/pricing algorithms significantly, in terms of both profit and the number of accepted users."
Researcher Affiliation | Academia | Bingqian Du, Chuan Wu, Zhiyi Huang (The University of Hong Kong)
Pseudocode | Yes | "Algorithm 1: DRL Algorithm for VM Placement and Pricing, LAPP"
Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology.
Open Datasets | Yes | "We make use of two sets of public traces: (i) Microsoft Azure dataset (Cortez et al. 2017), ... (ii) Google cluster-usage dataset (Reiss, Wilkes, and Hellerstein 2011)..."
Dataset Splits | Yes | "We extract one week's workload of a subscription from the Azure dataset for training our DRL model." (A trace-windowing sketch follows the table.)
Hardware Specification | Yes | "We implement LAPP using TensorFlow on a server equipped with one Nvidia GTX 1080 GPU, Intel Xeon E5-1620 CPU with 4 cores, and 32GB memory."
Software Dependencies | No | The paper mentions 'TensorFlow' but does not specify a version number for it or for any other key software dependency.
Experiment Setup | Yes | "The actor NN we use has 300 and 400 neurons in the two fully-connected layers, respectively, and the output from the LSTM is a vector of 256 units (Ming et al. 2017); the activation function is softmax for outputting v_i1 and rectifier for outputting v_i2. The critic NN has 400 neurons in each fully-connected layer and the output of the LSTM layer has a size of 256; the activation function is rectifier for its output layer. The learning rates in the actor network and the critic network are 10^-4 and 10^-4, respectively. We set Batch Size = 32, γ = 0.99, and L = 4." (A network-architecture sketch follows the table.)
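
As a rough illustration of the Dataset Splits row, here is a minimal pandas sketch of carving out one subscription's one-week workload from a VM trace. This is not the authors' code (none is released); the column names 'timestamp' and 'subscription_id' are assumptions and should be checked against the actual schema of the Azure 2017 trace.

import pandas as pd

WEEK_SECONDS = 7 * 24 * 3600  # one week, in seconds

def one_week_of_subscription(trace_csv, subscription_id, start_ts):
    # Load the trace and keep only the chosen subscription's records
    # that fall inside the [start_ts, start_ts + one week) window.
    # NOTE: column names here are hypothetical, not the trace's real schema.
    df = pd.read_csv(trace_csv)
    window = (
        (df["subscription_id"] == subscription_id)
        & (df["timestamp"] >= start_ts)
        & (df["timestamp"] < start_ts + WEEK_SECONDS)
    )
    return df[window].sort_values("timestamp")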
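
The Experiment Setup row pins down layer sizes and hyperparameters but not the exact wiring, so the following TensorFlow/Keras code is one plausible reconstruction rather than the authors' implementation. STATE_DIM, SEQ_LEN, N_SERVERS, the dimensions of the two action heads, the hidden-layer activations, the LSTM-before-FC ordering, the DDPG-style action input to the critic, and the use of Adam are all assumptions.

import tensorflow as tf
from tensorflow.keras import layers, Model, Input

STATE_DIM = 64   # assumed per-step state feature size
SEQ_LEN = 10     # assumed length of the state sequence fed to the LSTM
N_SERVERS = 20   # assumed size of the placement decision v_i1

def build_actor():
    s = Input(shape=(SEQ_LEN, STATE_DIM))
    h = layers.LSTM(256)(s)                      # LSTM output: vector of 256 units
    h = layers.Dense(300, activation="relu")(h)  # 1st fully-connected layer: 300 neurons
    h = layers.Dense(400, activation="relu")(h)  # 2nd fully-connected layer: 400 neurons
    v1 = layers.Dense(N_SERVERS, activation="softmax", name="v_i1")(h)  # softmax for v_i1
    v2 = layers.Dense(1, activation="relu", name="v_i2")(h)             # rectifier for v_i2
    return Model(s, [v1, v2])

def build_critic():
    s = Input(shape=(SEQ_LEN, STATE_DIM))
    a = Input(shape=(N_SERVERS + 1,))            # placement + price action (assumed)
    h = layers.LSTM(256)(s)                      # LSTM output size: 256
    h = layers.Concatenate()([h, a])
    h = layers.Dense(400, activation="relu")(h)  # 400 neurons in each FC layer
    h = layers.Dense(400, activation="relu")(h)
    q = layers.Dense(1, activation="relu")(h)    # rectifier for the output layer
    return Model([s, a], q)

actor_opt = tf.keras.optimizers.Adam(1e-4)   # actor learning rate 10^-4
critic_opt = tf.keras.optimizers.Adam(1e-4)  # critic learning rate 10^-4
BATCH_SIZE, GAMMA = 32, 0.99

A DDPG-style update would then train these two networks with the stated 10^-4 learning rates, Batch Size = 32, and γ = 0.99; the role of L = 4 is not specified in the excerpt, so it is left out of the sketch.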