Communication Efficient Distributed Learning for Kernelized Contextual Bandits
Authors: Chuanhao Li, Huazheng Wang, Mengdi Wang, Hongning Wang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We performed extensive empirical evaluations on both synthetic and real-world datasets, and the results (averaged over 3 runs) are reported in Figure 1, 2 and 3, respectively. |
| Researcher Affiliation | Academia | 1University of Virginia 2Oregon State University 3Princeton University {cl5ev,hw5x}@virginia.edu huazheng.wang@oregonstate.edu mengdiw@princeton.edu |
| Pseudocode | Yes | Algorithm 1 Distributed Kernel UCB (DisKernelUCB); Algorithm 2 Approximated Distributed Kernel UCB (Approx-DisKernelUCB); Algorithm 3 Ridge Leverage Score Sampling (RLS) |
| Open Source Code | No | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] |
| Open Datasets | Yes | We performed extensive empirical evaluations on both synthetic and real-world datasets... Figure 2: Experiment results on UCI datasets. [8] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. Figure 3: Experiment results on MovieLens & Yelp datasets. [13] F. Maxwell Harper and Joseph A. Konstan. The MovieLens datasets: History and context. |
| Dataset Splits | No | The paper mentions using grid search for hyperparameters ('grid search for α in {0.1, 1, 4}'), which implies some form of validation, but it does not explicitly provide details about specific training, validation, or test dataset splits in the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments. It states '[N/A]' for the question regarding compute resources in the self-assessment. |
| Software Dependencies | No | The paper mentions using a 'Gaussian kernel' but does not specify any software libraries or frameworks (e.g., TensorFlow, PyTorch, scikit-learn) with their version numbers that were used for implementation. |
| Experiment Setup | Yes | For all the kernelized algorithms, we used the Gaussian kernel k(x, y) = exp(−γ‖x − y‖²). We did a grid search of γ ∈ {0.1, 1, 4} for kernelized algorithms, and set D = 20 for DisLinUCB and DisKernelUCB, D = 5 for Approx-DisKernelUCB. For all algorithms, instead of using their theoretically derived exploration coefficient α, we followed the convention [20, 32] to use grid search for α ∈ {0.1, 1, 4}. |
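The experiment-setup row above describes a Gaussian kernel and a grid search over the kernel bandwidth γ and the exploration coefficient α. A minimal sketch of that setup is below; the `run_algorithm` placeholder is hypothetical (the paper does not release code), but the kernel and the grids match the quoted description.

```python
import numpy as np
from itertools import product

def gaussian_kernel(x, y, gamma):
    """Gaussian (RBF) kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

# Grids quoted in the paper: gamma and alpha both searched over {0.1, 1, 4}.
gamma_grid = [0.1, 1, 4]
alpha_grid = [0.1, 1, 4]

def grid_search(run_algorithm):
    """Return the (gamma, alpha) pair minimizing cumulative regret.

    `run_algorithm(gamma, alpha)` is a hypothetical callable that runs one
    bandit algorithm with the given hyperparameters and returns its regret.
    """
    best = min(product(gamma_grid, alpha_grid),
               key=lambda ga: run_algorithm(*ga))
    return best
```

The kernel satisfies k(x, x) = 1 and decays with squared distance, so larger γ makes the effective neighborhood of each context narrower.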