On Context-Dependent Clustering of Bandits
Authors: Claudio Gentile, Shuai Li, Purushottam Kar, Alexandros Karatzoglou, Giovanni Zappella, Evans Etrue
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We investigate a novel cluster-of-bandit algorithm CAB for collaborative recommendation tasks that implements the underlying feedback sharing mechanism by estimating user neighborhoods in a context-dependent manner. Experiments on production and real-world datasets show that CAB offers significantly increased prediction performance against a representative pool of state-of-the-art methods. |
| Researcher Affiliation | Collaboration | 1 DiSTA, University of Insubria, Italy; 2 University of Cambridge, United Kingdom; 3 IIT Kanpur, India; 4 Telefonica Research, Spain; 5 Amazon Dev Center, Germany (work done while at the University of Milan, Italy) |
| Pseudocode | Yes | Algorithm 1 Context-Aware clustering of Bandits (CAB). (A hedged sketch of this algorithm follows the table.) |
| Open Source Code | No | The paper does not provide any specific links to open-source code for the methodology described. |
| Open Datasets | Yes | KDD Cup. This dataset was released for the KDD Cup 2012 Online Advertising Competition, where the instances were derived from the session logs of the search engine soso.com. ... Avazu. This dataset was released for the Avazu Click Through Rate Prediction Challenge on Kaggle. ... Last FM and Delicious. These two datasets are extracted from the music streaming service Last.fm and the social bookmarking web service Delicious. |
| Dataset Splits | Yes | We used the first 20% of each dataset to tune the algorithms' parameters through a grid search, and report results on the remaining 80%. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or solvers). |
| Experiment Setup | Yes | We used the first 20% of each dataset to tune the algorithms' parameters through a grid search, and report results on the remaining 80%. ... We tuned α for all algorithms across the grid {0, 0.01, 0.02, . . . , 0.2}. The α2 parameter in CLUB was tuned within {0.1, 0.2, . . . , 0.5}. The number of clusters in DynUCB was increased in an exponential progression, starting from 1 and ending at n. Finally, the γ parameter in CAB was simply set to 0.2. (A hedged sketch of this tuning/evaluation protocol follows the table.) |
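
The Pseudocode row refers to Algorithm 1 (CAB). The snippet below is a minimal, illustrative Python sketch of that context-dependent neighborhood selection and feedback-sharing update, assuming LinUCB-style per-user ridge-regression estimators. The class name `CABSketch`, the simplified confidence-bound formula, and the `gamma / 4` sharing rule are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

class CABSketch:
    """Illustrative sketch of Context-Aware clustering of Bandits (CAB)."""

    def __init__(self, n_users, dim, alpha=0.1, gamma=0.2):
        self.alpha = alpha  # exploration width (tuned by grid search in the paper)
        self.gamma = gamma  # sharing threshold (set to 0.2 in the paper's experiments)
        self.M = [np.eye(dim) for _ in range(n_users)]    # per-user correlation matrices
        self.b = [np.zeros(dim) for _ in range(n_users)]  # per-user reward accumulators
        self.w = [np.zeros(dim) for _ in range(n_users)]  # per-user ridge-regression estimates

    def _cb(self, j, x):
        # Simplified confidence bound of user j's payoff estimate on context x.
        return self.alpha * np.sqrt(x @ np.linalg.solve(self.M[j], x))

    def recommend(self, user, items):
        """Pick an item index for `user`; `items` is an array of context vectors."""
        best = (-np.inf, 0, [user])
        for k, x in enumerate(items):
            # Context-dependent neighborhood: users whose predicted payoff on x is
            # statistically indistinguishable from the serving user's prediction.
            nbhd = [j for j in range(len(self.w))
                    if abs((self.w[user] - self.w[j]) @ x)
                    <= self._cb(user, x) + self._cb(j, x)]
            # Aggregate estimates and confidence bounds over the neighborhood (UCB score).
            score = (np.mean([self.w[j] @ x for j in nbhd]) +
                     np.mean([self._cb(j, x) for j in nbhd]))
            if score > best[0]:
                best = (score, k, nbhd)
        return best[1], best[2]

    def update(self, user, nbhd, x, reward):
        """Update the serving user; share the update with confident neighbors."""
        targets = [user]
        if self._cb(user, x) <= self.gamma / 4:
            targets += [j for j in nbhd
                        if j != user and self._cb(j, x) <= self.gamma / 4]
        for j in targets:
            self.M[j] += np.outer(x, x)
            self.b[j] += reward * x
            self.w[j] = np.linalg.solve(self.M[j], self.b[j])
```

Replaying one logged interaction would look like `k, nbhd = model.recommend(i, X)` followed by `model.update(i, nbhd, X[k], y)`.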
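
The Dataset Splits and Experiment Setup rows describe a 20%/80% tune/evaluate protocol with grid search over the parameters listed above. The sketch below illustrates that protocol under stated assumptions: the callback `run_bandit(records, **params)`, assumed to replay a logged dataset through a bandit algorithm and return a reward score, is hypothetical and not part of the paper.

```python
import numpy as np

def split_tune_evaluate(records, run_bandit, n_users):
    """20%/80% tune/evaluate protocol with grid search, per the reported setup."""
    cut = int(0.2 * len(records))
    tune_set, eval_set = records[:cut], records[cut:]

    # Parameter grids reported in the paper.
    alpha_grid = np.round(np.arange(0.0, 0.201, 0.01), 2)  # {0, 0.01, ..., 0.2}
    alpha2_grid = np.round(np.arange(0.1, 0.501, 0.1), 1)  # CLUB's alpha_2: {0.1, ..., 0.5}
    cluster_grid = [min(2 ** i, n_users)                    # DynUCB: 1, 2, 4, ..., n
                    for i in range(int(np.ceil(np.log2(n_users))) + 1)]
    # alpha2_grid and cluster_grid would be searched analogously for CLUB and DynUCB.

    # Tune alpha for CAB on the first 20% (gamma fixed at 0.2), report on the remaining 80%.
    best_alpha = max(alpha_grid, key=lambda a: run_bandit(tune_set, alpha=a, gamma=0.2))
    return run_bandit(eval_set, alpha=best_alpha, gamma=0.2)
```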