Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Neural Dueling Bandits: Preference-Based Optimization with Human Feedback

Authors: Arun Verma, Zhongxiang Dai, Xiaoqiang Lin, Patrick Jaillet, Bryan Kian Hsiang Low

ICLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on the problem instances derived from synthetic datasets corroborate our theoretical results. Finally, we empirically validate the different performance aspects of our proposed algorithms in Section 5 using problem instances derived from synthetic datasets.
Researcher Affiliation | Academia | (1) Singapore-MIT Alliance for Research and Technology, Republic of Singapore; (2) The Chinese University of Hong Kong, Shenzhen, China; (3) Department of Computer Science, National University of Singapore, Republic of Singapore; (4) LIDS and EECS, Massachusetts Institute of Technology, USA
Pseudocode | Yes | NDB-UCB Algorithm for Neural Dueling Bandit based on Upper Confidence Bound
Open Source Code | No | The paper does not provide an explicit statement about releasing code for the methodology described, nor does it provide a direct link to a code repository.
Open Datasets | No | Experimental results on the problem instances derived from synthetic datasets corroborate our theoretical results. Finally, we empirically validate the different performance aspects of our proposed algorithms in Section 5 using problem instances derived from synthetic datasets.
Dataset Splits | No | The paper uses synthetic datasets and describes how the features are generated (e.g., 'sampled uniformly at random from (-1, 1)'), but does not specify any training, testing, or validation splits for these datasets.
Hardware Specification | Yes | All the experiments are run on a server with AMD EPYC 7543 32-Core Processor, 256GB RAM, and 8 GeForce RTX 3080 GPUs.
Software Dependencies | No | The paper does not specify version numbers for any software libraries or programming languages used for implementing their methods. It mentions tools like Sentence-BERT but without version details.
Experiment Setup | Yes | In all our experiments, we use a NN with 2 hidden layers with width 50, λ = 1.0, δ = 0.05, d = 5, K = 5, and fixed value of νT = ν = 1.0. We retrain the NN after every 20 rounds and set the number of gradient steps to 50 in all our experiments.
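The experiment setup reported above (a network with 2 hidden layers of width 50, feature dimension d = 5, K = 5 arms per round with features drawn uniformly from (-1, 1)) can be sketched as below. The scoring network's initialization, the use of plain ReLU layers, and the rule of dueling the two highest-scoring arms are illustrative assumptions, not the paper's exact NDB-UCB procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, width = 5, 5, 50      # feature dim, arms per round, hidden width (paper values)
lam, nu = 1.0, 1.0          # regularization λ and exploration scale ν (paper values)

# Two-hidden-layer ReLU network with scalar output: f(x) = w3 · relu(W2 relu(W1 x)).
# The Gaussian initialization scale here is an assumption for illustration.
W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (width, d))
W2 = rng.normal(0.0, 1.0 / np.sqrt(width), (width, width))
w3 = rng.normal(0.0, 1.0 / np.sqrt(width), width)

def score(x):
    """Latent preference score of a single arm's feature vector."""
    h1 = np.maximum(W1 @ x, 0.0)
    h2 = np.maximum(W2 @ h1, 0.0)
    return w3 @ h2

# One round: K arms with features sampled uniformly from (-1, 1), as in the paper,
# then duel the two arms with the highest scores (a stand-in for the UCB-based
# selection; the paper retrains the network every 20 rounds with 50 gradient steps).
arms = rng.uniform(-1.0, 1.0, (K, d))
scores = np.array([score(x) for x in arms])
first, second = np.argsort(scores)[-2:][::-1]
```

Preference feedback on the chosen pair would then be used to fit the network (e.g., with a Bradley-Terry-style logistic loss on score differences), but that training loop is omitted here for brevity.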