No-Regret Learning in Dynamic Competition with Reference Effects Under Logit Demand

Authors: Mengzi Amy Guo, Donghao Ying, Javad Lavaei, Zuo-Jun Shen

NeurIPS 2023

Each entry below gives a reproducibility variable, the assessed result, and the LLM response supporting that assessment.
Research Type: Theoretical. "This work is dedicated to the algorithm design in a competitive framework, with the primary goal of learning a stable equilibrium. ... Despite the absence of typical properties required for the convergence of online games, such as strong monotonicity and variational stability, we demonstrate that under diminishing step-sizes, the price and reference price paths generated by OPGA converge to the unique SNE, thereby achieving the no-regret learning and a stable market. Moreover, with appropriate step-sizes, we prove that this convergence exhibits a rate of O(1/t)."
Researcher Affiliation: Academia. Mengzi Amy Guo (mengzi_guo@berkeley.edu), Donghao Ying (donghaoy@berkeley.edu), Javad Lavaei (lavaei@berkeley.edu), and Zuo-Jun Max Shen (maxshen@berkeley.edu), all of the IEOR Department, UC Berkeley.
Pseudocode: Yes. Algorithm 1: Online Projected Gradient Ascent (OPGA).
Open Source Code: No. The paper does not provide any statement or link indicating that source code for the described methodology is publicly available.
Open Datasets: No. The paper describes "numerical experiments" but does not mention the use of any publicly available datasets. The experiments are based on simulated parameters, not real-world data with public access.
Dataset Splits: No. The paper describes numerical experiments, but these experiments do not involve real-world datasets with typical training, validation, and test splits. The paper focuses on theoretical convergence.
Hardware Specification: No. The paper describes numerical experiments but does not provide any specific details about the hardware (e.g., CPU or GPU models) used to run them.
Software Dependencies: No. The paper describes theoretical algorithms and numerical experiments but does not specify any software dependencies with version numbers.
Experiment Setup: Yes. "Figure 1: Price and reference price paths for Examples 1, 2, and 3, where the parameters are (a_H, b_H, c_H) = (8.70, 2.00, 0.82), (a_L, b_L, c_L) = (4.30, 1.20, 0.32), (r_0^H, r_0^L) = (0.10, 2.95), (p_0^H, p_0^L) = (4.85, 4.86), and α = 0.90. ... In particular, Example 1 (see Figure 1a) corroborates Theorem 5.1 by demonstrating that the price and reference price trajectories converge to the unique SNE when we choose diminishing step-sizes that fulfill the criteria specified in Theorem 5.1. By comparison, the over-large constant step-sizes employed in Example 2 (see Figure 1b) fail to ensure convergence, leading to cyclic patterns in the long run."
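Since no source code is released, the OPGA dynamics described above can only be approximated. The sketch below is a minimal, hypothetical Python reconstruction: it assumes a standard multinomial logit demand with reference-price effects, a per-firm revenue objective p_i * d_i, an exponential-smoothing reference update r_{t+1} = α r_t + (1 − α) p_t, a [0, 10] price box for the projection, and a step-size schedule η_t = η_0 / t. Only the Figure 1 parameters and α come from the paper; everything else is an illustrative assumption, not the authors' implementation.

```python
import math

# Parameters (a_i, b_i, c_i) for firms H and L, taken from Figure 1 of the paper.
PARAMS = {"H": (8.70, 2.00, 0.82), "L": (4.30, 1.20, 0.32)}
ALPHA = 0.90               # reference-price memory parameter (from Figure 1)
P_MIN, P_MAX = 0.0, 10.0   # assumed feasible price interval for the projection


def demand(prices, refs):
    """Assumed multinomial logit demand with reference-price effects."""
    utils = {
        i: a - b * prices[i] + c * (refs[i] - prices[i])
        for i, (a, b, c) in PARAMS.items()
    }
    denom = 1.0 + sum(math.exp(u) for u in utils.values())
    return {i: math.exp(u) / denom for i, u in utils.items()}


def revenue_grad(i, prices, refs):
    """Gradient of firm i's revenue p_i * d_i, using the logit identity
    d(d_i)/d(p_i) = -(b_i + c_i) * d_i * (1 - d_i)."""
    d = demand(prices, refs)
    _, b, c = PARAMS[i]
    return d[i] - prices[i] * (b + c) * d[i] * (1.0 - d[i])


def opga(p0, r0, steps=5000, eta0=0.5):
    """Each firm ascends its own revenue with a diminishing step-size;
    reference prices update by exponential smoothing of past prices."""
    prices, refs = dict(p0), dict(r0)
    for t in range(1, steps + 1):
        eta = eta0 / t  # diminishing step-size, in the spirit of Theorem 5.1
        prices = {
            i: min(P_MAX, max(P_MIN, prices[i] + eta * revenue_grad(i, prices, refs)))
            for i in prices
        }
        refs = {i: ALPHA * refs[i] + (1.0 - ALPHA) * prices[i] for i in refs}
    return prices, refs


# Initial prices and reference prices from Figure 1.
prices, refs = opga(p0={"H": 4.85, "L": 4.86}, r0={"H": 0.10, "L": 2.95})
```

Under these assumptions, the diminishing step-size keeps the price path from cycling, and the reference prices settle onto the prices, mirroring the convergent behavior the paper reports for Example 1 (whereas a large constant `eta` would correspond to the cyclic Example 2).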