Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Market Interfaces for Electric Vehicle Charging
Authors: Sebastian Stein, Enrico H. Gerding, Adrian Nedea, Avi Rosenfeld, Nicholas R. Jennings
JAIR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In two experiments with over 300 users, we show that restricting the users' preferences significantly reduces the time they spend deliberating (by up to half in some cases). An extensive usability survey confirms that this restriction is furthermore associated with a lower perceived cognitive burden on the users. More surprisingly, at the same time, using restricted interfaces leads to an increase in the users' performance compared to the fully expressive interface (by up to 70%). |
| Researcher Affiliation | Academia | Sebastian Stein, Enrico H. Gerding, Adrian Nedea (University of Southampton, Southampton, United Kingdom); Avi Rosenfeld (Jerusalem College of Technology, Jerusalem, Israel); Nicholas R. Jennings (Imperial College, London, United Kingdom; King Abdulaziz University, Jeddah, Saudi Arabia) |
| Pseudocode | Yes | Algorithm 1: Optimal Solution; Algorithm 2: Reinforcement Learning Agent; Algorithm 3: Report Selection Function for Fully-Expressive; Algorithm 4: Report Selection Function for FINITE and SMV |
| Open Source Code | Yes | The game can be accessed at http://www.bid2charge.com/jair and the source code is available at https://github.com/soton-agents/bid2charge. |
| Open Datasets | No | The paper describes generating its own experimental data through user interaction with a game (Bid2Charge) and sets up the parameters for this environment (e.g., tasks, probabilities, market conditions), but does not provide a publicly available or open dataset of the *collected user data* or an external, well-known dataset used in the experiments. |
| Dataset Splits | No | The paper describes its experimental setup including game durations (e.g., "30 simulated days" for the first experiment, "three identical 10-day games in sequence" for the second) and task configurations. This refers to the structure of the *simulated environment* for user interaction rather than conventional training/test/validation splits of a pre-existing dataset. |
| Hardware Specification | Yes | For example, computing the optimal policy for the first experiment in Section 7 using a Python implementation of the above algorithm took 4.75 seconds on an Apple MacBook Pro with a 3.3 GHz Intel Core i7 CPU and 16GB RAM (including the Monte Carlo simulation of Xd and Pd). |
| Software Dependencies | No | The paper mentions "using a Python implementation" for solving the optimal policy but does not specify a Python version or any libraries with version numbers. |
| Experiment Setup | Yes | The game in the first experiment was played for 30 simulated days, and we varied the number of tasks every 1–4 days (with between 1–6 tasks available every day). ... In the second experiment, a 10-day game was played three times by each player... Specifically, each price was determined by first setting p_{d,0} = 0, and then iteratively determining each p_{d,x} as p_{d,x} = p_{d,x-1} + ε_x, where ε_x was drawn from a uniform distribution U(0.2x − 0.2, 0.4x + 0.6). ... Tables 3 and 4 show the set of potentially available journeys for each day in the first and second experiment, respectively. |
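The quoted price-generation rule (p_{d,0} = 0, then p_{d,x} = p_{d,x-1} + ε_x with ε_x ~ U(0.2x − 0.2, 0.4x + 0.6)) can be sketched in a few lines of Python. The function name, the `seed` parameter, and the use of the standard `random` module are illustrative assumptions, not part of the paper's released code:

```python
import random

def generate_price_curve(num_units, seed=None):
    """Sketch of the iterative price generation described in the paper:
    p_{d,0} = 0 and p_{d,x} = p_{d,x-1} + eps_x, with
    eps_x drawn uniformly from [0.2x - 0.2, 0.4x + 0.6].
    Returns prices for x = 0 .. num_units.
    """
    rng = random.Random(seed)
    prices = [0.0]  # p_{d,0} = 0
    for x in range(1, num_units + 1):
        eps = rng.uniform(0.2 * x - 0.2, 0.4 * x + 0.6)
        prices.append(prices[-1] + eps)
    return prices
```

Because the lower bound 0.2x − 0.2 is non-negative for every x ≥ 1, each increment ε_x is non-negative, so the resulting price curve is non-decreasing and convex in expectation (the expected increment grows linearly with x).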