Online Submodular Resource Allocation with Applications to Rebalancing Shared Mobility Systems

Authors: Pier Giuseppe Sessa, Ilija Bogunovic, Andreas Krause, Maryam Kamgarpour

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 6. Experiments: Learning to rebalance a Shared Mobility System. In this section, we evaluate our approach in a realistic case study of rebalancing the SMS of Louisville, KY, based on historical trip data.
Researcher Affiliation | Academia | ETH Zurich, Switzerland.
Pseudocode | Yes | Algorithm 1: Example of the NO-REGRET algorithm class, with the update rule of MWU (Freund & Schapire, 1997); an illustrative MWU sketch is given after the table.
Open Source Code | No | The paper does not provide an explicit statement or link indicating that code for the described methodology is open-sourced.
Open Datasets | Yes | Data from the Louisville Advanced Planning Office (2020) include trip timestamps and start and end coordinates of the dockless SMS of the city of Louisville, KY, for the year 2019.
Dataset Splits | No | The paper uses data from 2019 to simulate user demand but does not specify any explicit training, validation, or test splits (e.g., percentages or counts).
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., libraries, frameworks, or solvers) that would be needed for replication.
Experiment Setup | Yes | We consider N = 5 trucks, each dropping off 8 vehicles to one of the candidate regions... We let context z_t = [z_t[1], z_t[2], z_t[3]] ∈ R^3 represent average daily temperature, precipitation, and the users' demand in day t... We use a composite kernel k(x_t, z_t) = k1(x_t, z_t[3]) · k2(z_t[1], z_t[2]), where x_t = Σ_{i=1}^{N} x_i represents the total number of vehicles positioned in each region, k1 is a polynomial kernel of degree 3... and k2 is a squared-exponential kernel... We use two distinct models, depending on whether day t is a weekday or a weekend. Kernel hyperparameters are optimized offline over 100 random datapoints using a maximum likelihood method and kept fixed for the whole experiment duration. (An illustrative composite-kernel sketch also follows the table.)
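
The update rule referenced in the Pseudocode row is the multiplicative-weights (Hedge) step of Freund & Schapire (1997). The snippet below is a minimal, self-contained Python sketch of that step, not the authors' implementation; the learning rate eta, the number of actions, and the toy reward stream are illustrative assumptions.

```python
import numpy as np

def mwu_update(weights, rewards, eta):
    """One multiplicative-weights (Hedge) step: exponentially reweight each
    action by its observed reward, then renormalize to a probability vector."""
    new_weights = weights * np.exp(eta * rewards)
    return new_weights / new_weights.sum()

# Toy usage: 4 candidate actions, uniform initial distribution (all values illustrative).
rng = np.random.default_rng(0)
w = np.ones(4) / 4
for t in range(100):
    rewards = rng.uniform(size=4)      # placeholder per-action rewards at round t
    w = mwu_update(w, rewards, eta=0.1)
```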
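The Experiment Setup row describes a composite-kernel GP whose hyperparameters are fit offline by maximum likelihood. The paper does not name a GP library (see the Software Dependencies row), so the sketch below uses GPy purely as an illustrative choice; the number of regions R, the input layout, and the placeholder data are assumptions, not values from the paper.

```python
import numpy as np
import GPy

R = 10  # hypothetical number of candidate regions (not stated here)
# Assumed input layout per day t: [x_t (R dims), z_t[3] demand, z_t[1] temperature, z_t[2] precipitation]
k1 = GPy.kern.Poly(input_dim=R + 1, order=3, active_dims=list(range(R + 1)))
k2 = GPy.kern.RBF(input_dim=2, active_dims=[R + 1, R + 2])
kernel = k1 * k2  # composite kernel k(x_t, z_t) = k1(x_t, z_t[3]) * k2(z_t[1], z_t[2])

# Placeholder dataset of ~100 random datapoints, mirroring the offline
# maximum-likelihood fit quoted above (hyperparameters then kept fixed).
X = np.random.rand(100, R + 3)
y = np.random.rand(100, 1)
model = GPy.models.GPRegression(X, y, kernel)
model.optimize()  # type-II maximum-likelihood hyperparameter fit

# Per the quoted setup, two such models would be maintained: one for weekdays, one for weekends.
```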