Improved Regret for Bandit Convex Optimization with Delayed Feedback

Authors: Yuanyu Wan, Chang Yao, Mingli Song, Lijun Zhang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we compare our D-FTBL against GOLD [Héliou et al., 2020] and improved GOLD [Bistritz et al., 2022] by conducting simulation experiments on two publicly available data sets, ijcnn1 and SUSY, from the LIBSVM repository [Chang and Lin, 2011]. All algorithms are implemented with Python and tested on a laptop with a 2.4 GHz CPU and 16 GB of memory.
Researcher Affiliation | Academia | (1) School of Software Technology, Zhejiang University, Ningbo, China; (2) State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China; (3) Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security, Hangzhou, China; (4) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
Pseudocode | Yes | Algorithm 1: Delayed Follow-The-Bandit-Leader
Open Source Code | No | Interested readers can email the authors to request the source code.
Open Datasets | Yes | We compare our D-FTBL against GOLD [Héliou et al., 2020] and improved GOLD [Bistritz et al., 2022] by conducting simulation experiments on two publicly available data sets, ijcnn1 and SUSY, from the LIBSVM repository [Chang and Lin, 2011]. (A data-loading sketch is given after this table.)
Dataset Splits | No | For all algorithms, c and c′ are selected from {0.1, 1.0, 10} and {0.01, 0.1, …, 100}, respectively, simply according to their performance for d = 200.
Hardware Specification | Yes | All algorithms are implemented with Python and tested on a laptop with a 2.4 GHz CPU and 16 GB of memory.
Software Dependencies | No | All algorithms are implemented with Python and tested on a laptop with a 2.4 GHz CPU and 16 GB of memory.
Experiment Setup | Yes | According to the previous discussion of Theorem 1, we set α = 0, K = n T, δ = c n T^{1/4}, and η = c′/max{Td, n T^{3/4}} for our D-FTBL, tuning the two constants c and c′. For the two baselines, only the parameters δ and η need to be set; in addition to their theoretically suggested values, we introduce c and c′ as the respective scale factors. For all algorithms, c and c′ are selected from {0.1, 1.0, 10} and {0.01, 0.1, …, 100}, respectively, simply according to their performance for d = 200. Moreover, due to the randomness of these algorithms, we repeat them 20 times and report the average of their total loss. (A sketch of this tuning and averaging protocol follows the table.)
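
For context on the Open Datasets row, the following is a minimal sketch of how the two LIBSVM data sets could be loaded in Python. It is not the authors' (unreleased) code; the local file names, the use of scikit-learn's load_svmlight_file, and the dense conversion are illustrative assumptions.

    from sklearn.datasets import load_svmlight_file

    def load_libsvm_dataset(path):
        """Load a LIBSVM-format file and return dense features plus labels."""
        X, y = load_svmlight_file(path)
        return X.toarray(), y  # dense conversion; SUSY is large, so this needs ample memory

    # Assumed local file names; both sets can be downloaded from the LIBSVM repository.
    X_ijcnn1, y_ijcnn1 = load_libsvm_dataset("ijcnn1")
    X_susy, y_susy = load_libsvm_dataset("SUSY")
    print(X_ijcnn1.shape, X_susy.shape)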
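
The Experiment Setup row describes choosing the two scale factors by grid search at delay d = 200 and averaging results over 20 repeated runs. The sketch below illustrates that selection-and-averaging protocol under stated assumptions: run_dftbl is a hypothetical stand-in for the authors' D-FTBL implementation (here it returns a dummy loss), and the second grid is assumed to consist of the powers of ten between 0.01 and 100, since the quoted text elides the middle values.

    import itertools
    import numpy as np

    def run_dftbl(X, y, d, c, c_prime, rng):
        """Hypothetical stand-in for one randomized run of D-FTBL with delay d and
        scale factors c (for delta) and c_prime (for eta). Returns the total loss;
        here it only returns a dummy value so the protocol below can be executed."""
        return rng.random()

    # Grids from the Experiment Setup row; the second grid's middle values are assumed.
    C_GRID = [0.1, 1.0, 10]
    C_PRIME_GRID = [0.01, 0.1, 1.0, 10, 100]

    def select_scale_factors(X, y, d=200, repeats=20, seed=0):
        """Pick (c, c') by the average total loss over `repeats` randomized runs at d = 200."""
        best = None
        for c, c_prime in itertools.product(C_GRID, C_PRIME_GRID):
            losses = [run_dftbl(X, y, d, c, c_prime, np.random.default_rng(seed + r))
                      for r in range(repeats)]
            avg = float(np.mean(losses))
            if best is None or avg < best[0]:
                best = (avg, c, c_prime)
        return best  # (average total loss, c, c')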