Incentivizing Reliability in Demand-Side Response

Authors: Hongyao Ma, Valentin Robu, Na Li, David C. Parkes

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In an experimental evaluation with a wide range of parameter values, we show that the mechanisms achieve close to the first best (i.e., assuming the mechanism knows agent types and can choose the most reliable ones) with regard to the number of agents who are selected and asked to prepare. We show in this section via simulation that the direct and indirect mechanisms have good performance, compared with the best possible outcome (in a world without private information) as well as with the spot auction.
Researcher Affiliation | Academia | Hongyao Ma (Harvard University, hma@seas.harvard.edu); Valentin Robu (Heriot-Watt University, V.Robu@hw.ac.uk); Na Li (Harvard University, nali@seas.harvard.edu); David C. Parkes (Harvard University, parkes@eecs.harvard.edu)
Pseudocode | No | The paper describes mechanisms and mathematical models but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any statement about releasing source code for the methodology, nor does it provide a link to a code repository.
Open Datasets | No | The paper describes generating synthetic data for simulations from uniform distributions (e.g., v_i ~ U[0, 2], c_i ~ U[0, 2], p_i ~ U[0, 1]) rather than using an existing publicly available or open dataset.
Dataset Splits | No | The paper describes simulation experiments with randomly generated data ("average number of selected agents over 1000 economies", "computed over 1 million economies") but does not specify training, validation, or test dataset splits, as it is not a machine learning model trained on a fixed dataset.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run its simulations.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers, such as programming languages, libraries, or solvers used for the simulations.
Experiment Setup | Yes | Let the total number of agents be n = 500 and the types be i.i.d. from the distributions v_i ~ U[0, 2], c_i ~ U[0, 2], p_i ~ U[0, 1]. Varying τ from 0.9 to 0.999, the average number of selected agents over 1000 economies is as shown in Figure 3(a). Fixing τ = 0.98, the effect of varying the reward R is as shown in Figure 3(b), computed over 1 million economies.
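The experiment setup above can be sketched in code. This is a minimal illustration of drawing one synthetic economy (n = 500 agents with v_i ~ U[0, 2], c_i ~ U[0, 2], p_i ~ U[0, 1]) and counting how many agents a first-best selection would ask to prepare. The selection rule used here (add the most reliable agents until the chance that at least one responds reaches the target τ) is an illustrative stand-in, not the paper's actual mechanism, and the names `draw_economy` and `first_best_count` are hypothetical:

```python
import random

def draw_economy(n=500, seed=0):
    """Draw one economy: (value, cost, reliability) per agent,
    with v ~ U[0, 2], c ~ U[0, 2], p ~ U[0, 1]."""
    rng = random.Random(seed)
    return [(rng.uniform(0, 2), rng.uniform(0, 2), rng.uniform(0, 1))
            for _ in range(n)]

def first_best_count(agents, tau=0.98):
    """Illustrative first-best selection: pick the most reliable
    agents until P(at least one responds) = 1 - prod(1 - p_i) >= tau.
    Returns the number of selected agents."""
    ps = sorted((p for _, _, p in agents), reverse=True)
    fail = 1.0  # probability that no selected agent responds
    for k, p in enumerate(ps, start=1):
        fail *= (1.0 - p)
        if 1.0 - fail >= tau:
            return k
    return len(ps)

# Average number of selected agents over many random economies
# (the paper averages over 1000 economies; 100 here for brevity).
avg = sum(first_best_count(draw_economy(seed=s), tau=0.98)
          for s in range(100)) / 100
```

Varying `tau` from 0.9 to 0.999 in this sketch mimics the sweep behind Figure 3(a); the paper's actual direct and indirect mechanisms additionally account for private values and preparation costs.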