Parallel Online Clustering of Bandits via Hedonic Game

Authors: Xiaotong Cheng, Cheng Pan, Setareh Maghsudi

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate the performance of our algorithm using synthetic and real-world datasets. Besides, we compare the results to some state-of-the-art bandit and clustering of bandits algorithms."
Researcher Affiliation | Academia | "Department of Computer Science, University of Tübingen, Tübingen, Germany."
Pseudocode | Yes | Algorithm 1: CLUB-HG; Algorithm 2: Hedonic Clustering Game
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | Netflix: the Netflix Movie Rating Dataset from Netflix's Netflix Prize competition (https://www.kaggle.com/datasets/rishitjavia/netflix-movie-rating-dataset?resource=download); MovieLens: the MovieLens 25M Movie Ratings Dataset (https://grouplens.org/datasets/movielens/).
Dataset Splits | No | The paper describes data extraction and processing (e.g., for Netflix, "We extract 103 movies with the most ratings and n = 200 users..."), but it does not specify explicit training, validation, or test splits (e.g., percentages or sample counts for each split). A preprocessing sketch along these lines follows the table.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not mention specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or other library versions).
Experiment Setup | Yes | "Input: exploration parameter α_j(t), hedonic clustering accuracy parameter β_t. Initialization: b_{i,0} = 0 ∈ R^d and M_{i,0} = I ∈ R^{d×d} for all i ∈ V; clusters V̂_{1,1} = V, number of clusters m_1 = 1. We set L = 10, d = 5 and T = 1000. ... We set σ = 0.1. All regret plots are based on the average results of 20 independent runs." A configuration sketch based on these values follows the table.
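
As a concrete illustration of the extraction quoted under Dataset Splits, here is a minimal preprocessing sketch. It assumes the MovieLens 25M download with its standard ratings.csv layout (userId, movieId, rating, timestamp); the counts of 103 movies and 200 users are taken from the paper's Netflix description and reused here purely for illustration. This is not the authors' code.

import pandas as pd

# Minimal sketch: keep the most-rated movies and the most active users,
# then build a user-by-movie rating matrix.
# File name and column names assume the standard MovieLens 25M layout.
ratings = pd.read_csv("ml-25m/ratings.csv")

n_movies, n_users = 103, 200  # counts quoted for the Netflix data; reused for illustration
top_movies = ratings["movieId"].value_counts().index[:n_movies]
subset = ratings[ratings["movieId"].isin(top_movies)]
top_users = subset["userId"].value_counts().index[:n_users]
subset = subset[subset["userId"].isin(top_users)]

# User-by-movie rating matrix; entries never rated stay NaN.
rating_matrix = subset.pivot_table(index="userId", columns="movieId", values="rating")
print(rating_matrix.shape)  # at most (200, 103)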
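
The Experiment Setup row quotes the constants and the per-user initialization. The sketch below shows one way to set those quantities up in code; the ucb_index function is the generic linear-bandit (LinUCB-style) rule with an exploration parameter alpha standing in for the paper's α_j(t), not the exact CLUB-HG selection or its hedonic-game clustering step, and n_users is an illustrative choice.

import numpy as np

# Constants quoted in the setup: L arms per round, feature dimension d,
# horizon T, noise level sigma, and 20 independent runs for averaging.
L, d, T, sigma, n_runs = 10, 5, 1000, 0.1, 20
n_users = 200  # illustrative size of the user set V

# Per-user statistics as quoted: b_{i,0} = 0 in R^d and M_{i,0} = I in
# R^{d x d} for every user i in V, starting from a single cluster
# V_hat_{1,1} = V with m_1 = 1.
b = {i: np.zeros(d) for i in range(n_users)}
M = {i: np.eye(d) for i in range(n_users)}
clusters = [set(range(n_users))]

def ucb_index(i, x, alpha):
    # Generic linear-bandit index for user i and arm feature x: estimate
    # theta_hat = M_i^{-1} b_i and add an exploration bonus scaled by alpha.
    M_inv = np.linalg.inv(M[i])
    theta_hat = M_inv @ b[i]
    return float(x @ theta_hat + alpha * np.sqrt(x @ M_inv @ x))

def update_statistics(i, x, reward):
    # Standard rank-one update after user i plays the arm with feature x.
    M[i] += np.outer(x, x)
    b[i] += reward * x

Per the quoted setup, rewards would be perturbed with noise of standard deviation σ = 0.1 and the whole loop repeated 20 times before averaging the regret curves.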