Decentralized High-Dimensional Bayesian Optimization With Factor Graphs

Authors: Trong Nghia Hoang, Quang Minh Hoang, Ruofei Ouyang, Kian Hsiang Low

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluation on synthetic and real-world experiments (e.g., a sparse Gaussian process model with 1811 hyperparameters) shows that DEC-HBO outperforms state-of-the-art HBO algorithms.
Researcher Affiliation | Academia | Laboratory of Information and Decision Systems, Massachusetts Institute of Technology, USA; Department of Computer Science, National University of Singapore, Republic of Singapore
Pseudocode | Yes | Message Passing Protocol. In iteration $t+1$, let $m_{\phi_t^I \to x^{(i)}}(h)$ and $m_{x^{(i)} \to \phi_t^I}(h)$ denote the messages to be passed from a factor node $\phi_t^I(x_I)$ (i.e., a local acquisition function) to a variable node $x^{(i)}$ (i.e., component $i \in I$ of its input $x_I$), and from $x^{(i)}$ back to $\phi_t^I(x_I)$, respectively. Given $x^{(i)} = h$: $m_{\phi_t^I \to x^{(i)}}(h) \triangleq \max_{h_{I \setminus i} \in D(x_{I \setminus i})} \big[\Delta_i \phi_t^I(h_{I \setminus i}) + \phi_t^I(h_{I \setminus i}, h)\big]$, $\; m_{x^{(i)} \to \phi_t^I}(h) \triangleq \sum_{I' \in A(i) \setminus \{I\}} m_{\phi_t^{I'} \to x^{(i)}}(h)$ (3)
Open Source Code | No | The paper mentions a third-party implementation for comparison ('github.com/ziyuw/rembo') but does not state that its own source code is available.
Open Datasets | Yes | Physicochemical properties of protein tertiary structure (Rana 2013) and CIFAR-10.
Dataset Splits | Yes | PIC: '95% and 5% of the dataset are used as training and test data, respectively. The training data is further divided into 5 equal folds.' CNN: 'The CNN model is trained using the CIFAR-10 object recognition dataset which has 50000 training images and 10000 test images, each of which belongs to one of the ten classes. 5000 training images are set aside as the validation data.'
Hardware Specification | No | The paper mentions 'our machine' without specifying any concrete hardware components such as CPU/GPU models or memory.
Software Dependencies | No | The paper mentions 'keras' for the CNN and 'MATLAB' for REMBO (a baseline) but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | The six CNN hyperparameters to be optimized include the SGD learning rate in the range [10^-5, 1], three dropout rates in the range [0, 1], batch size in the range [100, 1000], and number of learning epochs in the range [100, 1000].
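The six-hyperparameter search space quoted in the Experiment Setup row can be sketched as a sampler. The box bounds come from the paper; the dict layout, parameter names, log-scale sampling for the learning rate, and integer-valued batch size/epochs are illustrative assumptions, not the paper's implementation.

```python
import math
import random

# Hypothetical search-space table: name -> (scale, lower, upper).
# Bounds mirror the quoted setup; everything else is an assumption.
SEARCH_SPACE = {
    "learning_rate": ("log", 1e-5, 1.0),     # SGD learning rate in [10^-5, 1]
    "dropout_1":     ("linear", 0.0, 1.0),   # three dropout rates in [0, 1]
    "dropout_2":     ("linear", 0.0, 1.0),
    "dropout_3":     ("linear", 0.0, 1.0),
    "batch_size":    ("int", 100, 1000),     # batch size in [100, 1000]
    "epochs":        ("int", 100, 1000),     # learning epochs in [100, 1000]
}

def sample(space, rng=random):
    """Draw one random configuration from the box-constrained space."""
    cfg = {}
    for name, (scale, lo, hi) in space.items():
        if scale == "log":
            # sample uniformly in log space for scale-free parameters
            cfg[name] = math.exp(rng.uniform(math.log(lo), math.log(hi)))
        elif scale == "int":
            cfg[name] = rng.randint(lo, hi)
        else:
            cfg[name] = rng.uniform(lo, hi)
    return cfg
```

Random sampling here only illustrates the domain; DEC-HBO itself selects inputs by maximizing the decomposed acquisition function over this space.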
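The PIC split quoted in the Dataset Splits row (95% training / 5% test, with the training data cut into 5 equal folds) can be sketched with stdlib Python; the function name and seed handling are illustrative.

```python
import random

def split_dataset(n, test_frac=0.05, n_folds=5, seed=0):
    """Shuffle indices, hold out test_frac as test data, and cut the
    remaining training indices into n_folds equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_test = int(round(n * test_frac))
    test, train = idx[:n_test], idx[n_test:]
    fold_size = len(train) // n_folds
    folds = [train[k * fold_size:(k + 1) * fold_size] for k in range(n_folds)]
    return train, test, folds
```

For example, with 1000 rows this yields 950 training indices, 50 test indices, and five folds of 190 indices each.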
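The max-sum message passing of Eq. (3) in the Pseudocode row can be sketched for discrete candidate domains as below. All names (max_sum_messages, decode, neighbors) are illustrative, and the sketch assumes the term Δ_i φ_t^I(h_{I∖i}) aggregates the incoming variable-to-factor messages for the other components of I, as in standard max-sum; it is a minimal serial sketch, not the paper's decentralized implementation.

```python
import itertools

def max_sum_messages(factors, domains, neighbors, iters=10):
    """Run max-sum over a factor graph of local acquisition functions.

    factors:   {I: f} where I is a tuple of variable ids and f maps an
               assignment dict {var: value} to a float (phi_t^I).
    domains:   {i: list of candidate values for x^(i)} (D(x^(i))).
    neighbors: {i: list of factor keys containing i} (the set A(i)).
    """
    # messages m_{phi^I -> x^(i)} and m_{x^(i) -> phi^I}, initialized to zero
    f2v = {(I, i): {h: 0.0 for h in domains[i]} for I in factors for i in I}
    v2f = {(i, I): {h: 0.0 for h in domains[i]} for I in factors for i in I}
    for _ in range(iters):
        # factor-to-variable update (first line of Eq. (3)): maximize over
        # assignments to I \ {i}, adding incoming variable messages
        for I, f in factors.items():
            for i in I:
                rest = [j for j in I if j != i]
                for h in domains[i]:
                    best = float("-inf")
                    for vals in itertools.product(*(domains[j] for j in rest)):
                        a = dict(zip(rest, vals))
                        a[i] = h
                        s = f(a) + sum(v2f[(j, I)][a[j]] for j in rest)
                        best = max(best, s)
                    f2v[(I, i)][h] = best
        # variable-to-factor update (second line of Eq. (3)): sum of
        # incoming factor messages, excluding the recipient factor I
        for i, doms in domains.items():
            for I in neighbors[i]:
                for h in doms:
                    v2f[(i, I)][h] = sum(f2v[(J, i)][h]
                                         for J in neighbors[i] if J != I)
    return f2v

def decode(f2v, domains, neighbors):
    """Pick each x^(i) to maximize the sum of its incoming factor messages."""
    return {i: max(domains[i],
                   key=lambda h: sum(f2v[(I, i)][h] for I in neighbors[i]))
            for i in domains}
```

On a chain of two factors, e.g. phi^{(0,1)}(a) = -(a[0]-a[1])^2 and phi^{(1,2)}(a) = a[1]+a[2] with binary domains, the decoded maximizer sets all three variables to 1.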