Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Spectral Bandits
Authors: Tomáš Kocák, Rémi Munos, Branislav Kveton, Shipra Agrawal, Michal Valko
JMLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the empirical regret as well as the empirical computational complexity of Spectral TS, Spectral UCB, Linear TS, and LinUCB on artificial datasets with different types of underlying graph structure as well as on MovieLens and Flixster datasets. |
| Researcher Affiliation | Collaboration | Tomáš Kocák EMAIL ENS de Lyon, 15 Parvis René Descartes, 69342 Lyon, France; Rémi Munos EMAIL DeepMind Paris, 14 Rue de Londres, 75009 Paris, France; Branislav Kveton EMAIL Google Research, 1600 Amphitheatre Parkway, Mountain View, CA 94043, United States; Shipra Agrawal EMAIL Columbia University, West 120th Street, New York, NY, 10027, United States; Michal Valko EMAIL DeepMind Paris, 14 Rue de Londres, 75009 Paris, France |
| Pseudocode | Yes | Algorithm 1 Spectral UCB; Algorithm 2 Spectral TS; Algorithm 3 Spectral Eliminator |
| Open Source Code | No | The paper does not provide an explicit statement about the release of source code for the methodology described, nor does it include a link to a code repository. |
| Open Datasets | Yes | as well as on MovieLens and Flixster datasets. [...] In this experiment, we take user preferences and the similarity graph over movies from the MovieLens dataset (Lam and Herlocker, 2012), a dataset of 6k users who rated one million movies. [...] We also perform experiments on users' preferences from the movie recommendation website Flixster. The social network of the users was crawled by Jamali and Ester (2010) |
| Dataset Splits | Yes | Then we divide the dataset into three parts. The first is used to build our model of users, the rating that user i assigns to movie j. ... The second part of the dataset is used for parameter estimation. ... The last part of the dataset is used to build our similarity graph over movies. ... Table 5 summarizes the best parameters learned on the training part of the dataset. We use the parameters to run the algorithms on the test part. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or version numbers for libraries, programming languages, or frameworks used in the implementation or experiments. |
| Experiment Setup | Yes | In all experiments, we set the confidence parameter δ, use uniformly distributed noise satisfying R = 0.05, and average over 5 runs. [...] set the regularization parameter λ and confidence ellipsoid parameters v (TS) and c (UCB) respectively to the best empirical value over a grid search. [...] Table 1: The best-performing empirical parameters for the Erdős–Rényi graph model. |
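The Pseudocode row refers to the paper's Algorithm 1 (Spectral UCB): a LinUCB-style bandit whose arm features are rows of the graph Laplacian's eigenvector matrix, regularized per coordinate by the corresponding eigenvalues so that reward functions that are smooth on the graph are learned quickly. The sketch below is a minimal NumPy illustration of that idea, not the authors' code; the helper name `spectral_ucb` and its parameters (`lam` for λ, `conf` for the confidence scale c) are hypothetical, and noise and parameter tuning from the paper's setup are omitted.

```python
import numpy as np

def spectral_ucb(adjacency, get_reward, n_rounds, lam=0.01, conf=0.5):
    """Illustrative sketch of Spectral UCB (hypothetical helper, not the
    authors' implementation). Arms are graph nodes; node i's feature vector
    is row i of the Laplacian eigenvector matrix Q, and the ridge penalty is
    Lambda + lam*I with Lambda = diag(eigenvalues), which favors reward
    vectors that are smooth on the graph."""
    A = np.asarray(adjacency, dtype=float)
    n = len(A)
    L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian
    eigvals, Q = np.linalg.eigh(L)          # L = Q diag(eigvals) Q^T
    X = Q                                   # row i = features of arm i
    V = np.diag(eigvals) + lam * np.eye(n)  # spectral regularizer
    b = np.zeros(n)
    picks = []
    for _ in range(n_rounds):
        V_inv = np.linalg.inv(V)
        theta = V_inv @ b                   # ridge estimate of the weights
        # Confidence widths: sqrt of diag(X V^{-1} X^T), one entry per arm.
        width = np.sqrt(np.einsum('ij,jk,ik->i', X, V_inv, X))
        arm = int(np.argmax(X @ theta + conf * width))
        r = get_reward(arm)                 # observe reward for chosen node
        V += np.outer(X[arm], X[arm])       # rank-one design update
        b += r * X[arm]
        picks.append(arm)
    return picks
```

On a small path graph with a reward vector peaked at the middle node, this sketch concentrates its pulls on the best arm after a short exploration phase, illustrating why the spectral penalty helps when rewards vary smoothly across neighboring nodes.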