Contextual Gaussian Process Bandits with Neural Networks
Authors: Haoting Zhang, Jinghai He, Rhonda Righter, Zuo-Jun Shen, Zeyu Zheng
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on both synthetic and practical problems, illustrating the effectiveness of our approach. |
| Researcher Affiliation | Academia | Department of Industrial Engineering & Operations Research, University of California, Berkeley, Berkeley, CA 94720. {haoting_zhang,jinghai_he,rrighter,maxshen,zyzheng}@berkeley.edu |
| Pseudocode | Yes | Algorithm 1 NN-AGP-UCB |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or links to repositories for the methodology described. |
| Open Datasets | No | The paper primarily uses synthetic data and generated data for its experiments (e.g., 'synthetic reward functions', 'sample θt from multivariate normal distributions', 'reward is generated by stochastic simulation'). It does not provide concrete access information or citations to any established publicly available datasets. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or citations to predefined splits) needed to reproduce the data partitioning for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions software components like 'fully-connected neural network (FCN)', 'long short-term memory (LSTM) neural network', and 'graph convolutional neural network (GCN)', but it does not provide specific version numbers for any software dependencies, libraries, or frameworks used. |
| Experiment Setup | No | The paper mentions general setup details, such as studying the effect of 'm', using random initialization for the first 20 iterations, and treating 'βt' as a hyper-parameter. However, it does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations for the neural networks used in the experiments. |
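For context on the algorithm family the paper's Algorithm 1 (NN-AGP-UCB) belongs to: a GP-UCB method fits a Gaussian process surrogate to observed rewards and, each round, pulls the arm maximizing the posterior mean plus a scaled posterior standard deviation, where the scale βt is the hyper-parameter the paper leaves unspecified. The sketch below is a minimal generic GP-UCB loop with an RBF kernel, not the authors' NN-AGP-UCB (which additionally models contextual dependence with a neural network); the grid size, length scale, noise level, and β value are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_posterior(X_obs, y_obs, X_query, noise=1e-2):
    """Standard GP regression posterior mean and std at the query points."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_inv = np.linalg.inv(K)
    Ks = rbf_kernel(X_query, X_obs)
    mu = Ks @ K_inv @ y_obs
    # Prior variance of the RBF kernel is 1, minus the explained variance.
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, K_inv, Ks)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def gp_ucb(reward_fn, X_grid, T=30, beta=2.0, rng=None):
    """Generic GP-UCB: each round, pull argmax of mu + sqrt(beta) * sigma."""
    rng = np.random.default_rng(rng)
    idx = [int(rng.integers(len(X_grid)))]      # random first pull
    y = [reward_fn(X_grid[idx[0]])]
    for _ in range(T - 1):
        mu, sigma = gp_posterior(X_grid[idx], np.array(y), X_grid)
        i = int(np.argmax(mu + np.sqrt(beta) * sigma))
        idx.append(i)
        y.append(reward_fn(X_grid[i]))
    best = int(np.argmax(y))                    # best observed arm
    return X_grid[idx[best]], y[best]
```

For example, running `gp_ucb` on a one-dimensional grid against the reward `-(x - 0.3)**2` concentrates pulls near the optimum at x = 0.3 after a few exploratory rounds. NN-AGP-UCB replaces this fixed kernel view with a neural-network component that adapts the surrogate to the observed context.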