Bandit Multi-linear DR-Submodular Maximization and Its Applications on Adversarial Submodular Bandits

Authors: Zongqi Wan, Jialin Zhang, Wei Chen, Xiaoming Sun, Zhijie Zhang

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We investigate the online bandit learning of monotone multi-linear DR-submodular functions, designing the algorithm Bandit MLSM that attains O(T^{2/3} log T) (1 - 1/e)-regret. Then we reduce submodular bandit with partition matroid constraint and bandit sequential monotone maximization to the online bandit learning of monotone multi-linear DR-submodular functions, attaining O(T^{2/3} log T) (1 - 1/e)-regret in both problems, improving on the existing results. (Standard definitions of the (1 - 1/e)-regret and the multilinear extension are sketched after this table.)
Researcher Affiliation | Collaboration | (1) Institute of Computing Technology, Chinese Academy of Sciences; (2) University of Chinese Academy of Sciences; (3) Microsoft Research; (4) Center for Applied Mathematics of Fujian Province, School of Mathematics and Statistics, Fuzhou University.
Pseudocode | Yes | Algorithm 1: Bandit MLSM(η, L, Φ) [...]; Algorithm 2: MLSMWrapper(η, L, Φ, EXT) [...]; Algorithm 3: Bandit DRSM(η, δ, L, Φ) [...]; Algorithm 4: Bandit MLSM4PS(η, L, Φ)
Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository.
Open Datasets | No | The paper is a theoretical work focusing on algorithms and regret bounds, and does not use or mention any specific publicly available datasets.
Dataset Splits | No | The paper is theoretical and does not describe empirical experiments, so no dataset split information for validation is provided.
Hardware Specification | No | The paper is theoretical and does not report on empirical experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and focuses on algorithm design and proofs; it does not mention any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not report on empirical experiments, so no details on experimental setup such as hyperparameters or training settings are provided.
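
Note on the regret notion quoted in the Research Type row: the summary does not reproduce the paper's definitions, so the following is the standard alpha-regret with alpha = 1 - 1/e from the online submodular maximization literature, not text from the paper. The notation (f_t for the round-t reward function, x_t for the learner's action, K for the feasible set) is an assumption for illustration.

% Standard (1 - 1/e)-regret; general-literature definition,
% notation assumed rather than quoted from the paper.
\[
  \mathcal{R}_{1-1/e}(T)
    \;=\; \Bigl(1 - \tfrac{1}{e}\Bigr)
          \max_{x \in \mathcal{K}} \sum_{t=1}^{T} f_t(x)
    \;-\; \mathbb{E}\Bigl[\,\sum_{t=1}^{T} f_t(x_t)\Bigr].
\]

Under this reading, the abstract's claim is that this quantity is O(T^{2/3} log T); the 1 - 1/e factor reflects the standard hardness of monotone submodular maximization even in the offline setting.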
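"Multi-linear" in the title plausibly refers to the multilinear extension, the standard bridge from a set function f : 2^V -> R to a continuous function on [0,1]^V, which is what typically enables reductions of the kind the abstract describes. Again, this is the textbook definition, not a formula taken from the paper:

% Multilinear extension of a set function f on ground set V
% (textbook definition, stated here for context).
\[
  F(x) \;=\; \sum_{S \subseteq V} f(S)
             \prod_{i \in S} x_i
             \prod_{j \in V \setminus S} (1 - x_j),
  \qquad x \in [0,1]^V.
\]

When f is monotone submodular, F is monotone DR-submodular, which is the property that allows discrete submodular bandit problems to be treated by a continuous algorithm such as Bandit MLSM.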