Fast Routing under Uncertainty: Adaptive Learning in Congestion Games via Exponential Weights
Authors: Dong Quan Vu, Kimon Antonakopoulos, Panayotis Mertikopoulos
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5 Numerical Experiments |
| Researcher Affiliation | Collaboration | Dong Quan Vu and Kimon Antonakopoulos (Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, 38000 Grenoble, France; dong-quan.vu@inria.fr, kimon.antonakopoulos@inria.fr); Panayotis Mertikopoulos (Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LIG, 38000 Grenoble, France, and Criteo AI Lab; panayotis.mertikopoulos@imag.fr) |
| Pseudocode | Yes | Algorithm 1: Exponential weights (EXPWEIGHT); Algorithm 2: Accelerated exponential weights (ACCELEWEIGHT); Algorithm 3: Adaptive exponential weights (ADAWEIGHT) |
| Open Source Code | No | The paper does not provide a link to its own open-source code for the described methodology. It mentions: "We leave the details of this efficient implementation for future works." |
| Open Datasets | Yes | To illustrate the performance of the algorithms in the static case, we take advantage of a real data set collected and provided by [18] (with free license). This data set contains no personally identifiable information or offensive content. In this section, we present the experimental results on one instance in this data set: the Sioux Falls network (originally introduced by [28]). [18] Transportation Networks for Research Core Team. Transportation networks for research project. https://github.com/bstabler/TransportationNetworks, 2021. |
| Dataset Splits | No | The paper describes the setup for experiments (e.g., network size, number of O/D pairs, noise distribution), but it does not specify how the data (e.g., the Sioux Falls network) was split into training, validation, or test sets. The experiments focus on the convergence of algorithms rather than model training/evaluation on a split dataset in the traditional supervised learning sense. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or solvers used in the experiments. |
| Experiment Setup | Yes | First, we conduct an experiment on a routing game with noisy observations (as described in Example 4) with the following setup: on a randomly generated network with 24 vertices and 276 edges, we assign to each edge a BPR cost function and choose N = 6 origin-destination pairs, each of which comes with a traffic demand; the traffic is routed via the set of P = 3366 paths. In particular, at each time t, after the algorithms decide their flow profiles, we perturb the induced cost on each edge e by a noise ω_e^t drawn independently from the normal distribution N(0, 10). The algorithms only observe these stochastically perturbed costs and use them to compute their next iterates. ... The Sioux Falls network has 24 vertices and 76 edges. We choose N = 30 origin-destination pairs, and the total number of paths in use is P = 3000. ... We run EXPWEIGHT, ACCELEWEIGHT and ADAWEIGHT for 16,000 iterations and report the results in Figure 1b. |
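To make the table's "Pseudocode" and "Experiment Setup" rows concrete, the following is a minimal sketch of an exponential-weights (EXPWEIGHT-style) update on a toy routing instance with noisy cost observations. Everything here is an illustrative assumption: the 3-path network, the linear path costs, the step size `eta`, and the reduced noise scale are not taken from the paper, which uses BPR edge costs, thousands of paths, and N(0, 10) edge noise.

```python
import numpy as np

rng = np.random.default_rng(0)

P = 3                      # number of paths for one origin-destination pair (toy size)
demand = 1.0               # traffic demand routed over the P paths
eta = 0.1                  # step size (illustrative, not the paper's choice)
T = 200                    # number of iterations

# Toy linear path costs: the cost of path p grows with the flow it carries.
slope = np.array([1.0, 2.0, 3.0])
base = np.array([2.0, 1.0, 0.5])

scores = np.zeros(P)       # cumulative negative-cost scores per path
for t in range(T):
    # Logit choice map: flow on each path proportional to exp(score).
    weights = np.exp(scores - scores.max())
    flow = demand * weights / weights.sum()

    # True path costs induced by the current flow profile.
    cost = base + slope * flow

    # The learner only sees stochastically perturbed costs; the paper
    # perturbs edge costs with N(0, 10) noise, here we perturb path
    # costs directly with a smaller variance to keep the toy stable.
    observed = cost + rng.normal(0.0, 0.5, size=P)

    # Exponential-weights update on the noisy observation.
    scores -= eta * observed

print(np.round(flow, 3))   # empirical flow split after T rounds
```

Despite only ever seeing noisy costs, the flow split drifts toward an equalized-cost (Wardrop-style) profile because the zero-mean noise averages out across iterations, which is the behavior the paper's convergence experiments measure.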