Congestion Games for V2G-Enabled EV Charging

Authors: Benny Lutati, Vadim Levit, Tal Grinshpoun, Amnon Meisels

AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | A detailed empirical evaluation assesses the performance of the iterated best-response process. The evaluation considers the quality of the resulting solutions and the rate of convergence to a stable state.
Researcher Affiliation | Academia | 1 Department of Computer Science, Ben-Gurion University of the Negev, Be'er-Sheva, Israel; 2 Department of Industrial Engineering and Management, Ariel University, Ariel, Israel
Pseudocode | Yes | Algorithm 1: Find Best Response (a_i, d_i, q_i, d). A hedged sketch of one possible best-response step appears after this table.
Open Source Code | No | The paper does not provide any statements or links indicating that its source code is publicly available.
Open Datasets | No | The problems used in this evaluation were randomly generated according to the following process. First, the number of agents V and time-slots T were given to each experiment as parameters. Next, a background power load was randomly selected for each time-slot from the range [0, |V|/2]. Then, the EVs' preferences were generated by randomly selecting the arrival and departure times (in the range [0, |T|]), as well as the amount of energy units that each EV needs to charge. This amount was defined by a natural number randomly selected from the range [0, 100]. All selections were made with uniform distribution. The paper does not provide access information for this generated data. A sketch of this generation process is given after the table.
Dataset Splits | No | The paper describes generating random problems and instances for evaluation but does not specify any training, validation, or test dataset splits or cross-validation setups. It refers to "200 randomly generated problems" for evaluation.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not name any specific software dependencies or version numbers (e.g., programming languages, libraries, or solvers) used for the implementation or experiments.
Experiment Setup | No | The paper describes how the input problems were generated (e.g., number of agents, time-slots, background load) and discusses player orderings for convergence (Round-robin, Expensive first). However, it does not specify concrete hyperparameters or system-level training settings typically found in experimental setups for models (e.g., learning rates, batch sizes, number of epochs, optimizer details). A sketch of an iterated best-response loop under these orderings follows the table.
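The generation procedure quoted in the Open Datasets row can be turned into a short sketch. The code below is a minimal reading of that description, not the authors' generator: it assumes uniform integer sampling for arrival, departure, and demand, a uniformly drawn real-valued background load, and that an EV's departure is taken to be no earlier than its arrival (the quoted text does not say how that is handled). The function name generate_problem and the dictionary layout are illustrative.

```python
import random

def generate_problem(num_agents, num_slots, seed=None):
    """Sketch of the random problem generator described in the paper.

    Assumptions not stated in the quoted text:
    - all draws are uniform,
    - an EV's departure time is at least its arrival time,
    - the background load per slot is a real number in [0, num_agents / 2].
    """
    rng = random.Random(seed)

    # Background power load for each time-slot, drawn from [0, |V|/2].
    background = [rng.uniform(0, num_agents / 2) for _ in range(num_slots)]

    evs = []
    for _ in range(num_agents):
        # Arrival and departure times in [0, |T|]; ordered so that departure
        # does not precede arrival (assumed, not stated in the paper).
        t1, t2 = rng.randint(0, num_slots), rng.randint(0, num_slots)
        arrival, departure = min(t1, t2), max(t1, t2)
        # Energy units to charge: a natural number in [0, 100].
        demand = rng.randint(0, 100)
        evs.append({"arrival": arrival, "departure": departure, "demand": demand})

    return {"background": background, "evs": evs}
```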
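The Pseudocode row only names Algorithm 1, Find Best Response (a_i, d_i, q_i, d); the algorithm itself is not reproduced on this page. As one plausible reading, an EV with arrival a_i, departure d_i, and demand q_i would place its q_i energy units in the least-loaded slots of its availability window given the current aggregate load d. The sketch below implements that greedy reading under an assumed load-proportional cost; it should not be taken as the paper's actual procedure.

```python
def find_best_response(arrival, departure, demand, load):
    """Hedged sketch of a best-response step for a single EV.

    Greedily places `demand` energy units into the slots of the EV's
    availability window, one unit at a time, always picking the slot with
    the lowest current load. This is an assumed reading, not the paper's
    Algorithm 1.
    """
    load = list(load)  # work on a copy of the aggregate load d
    window = range(arrival, min(departure, len(load)))
    schedule = {t: 0 for t in window}
    if not schedule:
        return {}  # empty availability window, nothing to place
    for _ in range(demand):
        t = min(schedule, key=lambda s: load[s])  # cheapest slot right now
        schedule[t] += 1
        load[t] += 1
    return {t: units for t, units in schedule.items() if units > 0}
```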
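The Experiment Setup row notes that convergence is discussed under different player orderings (Round-robin and Expensive first). The loop below sketches how an iterated best-response process could be driven by those two orderings, reusing the generate_problem and find_best_response sketches above; reading "Expensive first" as ordering agents by their current charging cost is an assumption, as is the load-weighted cost itself.

```python
def iterated_best_response(problem, ordering="round-robin", max_rounds=100):
    """Sketch of an iterated best-response loop over the EVs of a generated
    problem, stopping once a full round passes with no schedule changing.

    `ordering` picks the player order per round:
      - "round-robin": fixed index order,
      - "expensive-first": agents whose current schedule is costliest under
        the aggregate load respond first (an assumed reading of the rule).
    """
    load = list(problem["background"])
    evs = problem["evs"]
    schedules = [{} for _ in evs]

    def agent_cost(i):
        # Assumed cost: units placed in a slot, weighted by that slot's load.
        return sum(units * load[t] for t, units in schedules[i].items())

    for _ in range(max_rounds):
        order = list(range(len(evs)))
        if ordering == "expensive-first":
            order.sort(key=agent_cost, reverse=True)
        changed = False
        for i in order:
            ev = evs[i]
            # Withdraw the agent's current units before recomputing its response.
            for t, units in schedules[i].items():
                load[t] -= units
            new_schedule = find_best_response(
                ev["arrival"], ev["departure"], ev["demand"], load)
            for t, units in new_schedule.items():
                load[t] += units
            if new_schedule != schedules[i]:
                schedules[i] = new_schedule
                changed = True
        if not changed:
            break
    return schedules


# Illustrative usage of the sketches together (hypothetical parameters).
problem = generate_problem(num_agents=20, num_slots=48, seed=0)
final_schedules = iterated_best_response(problem, ordering="expensive-first")
```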