High Dimensional Level Set Estimation with Bayesian Neural Network

Authors: Huong Ha, Sunil Gupta, Santu Rana, Svetha Venkatesh (pp. 12095-12103)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments on both synthetic and real-world datasets show that our proposed method can achieve better results compared to existing state-of-the-art approaches.
Researcher Affiliation | Academia | Huong Ha*, Sunil Gupta, Santu Rana, Svetha Venkatesh, Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong, Australia. *Correspondence to: huong.ha@rmit.edu.au
Pseudocode | Yes | Algorithm 1 (Exp-HLSE: Explicit High Dimensional LSE via Bayesian Neural Network) and Algorithm 2 (Imp-HLSE: Implicit High Dimensional LSE via Bayesian Neural Network).
Open Source Code | Yes | Our source code is publicly available at https://github.com/HuongHa12/HighdimLSE.
Open Datasets | Yes | We evaluate the performance of the methods on three ten-dimensional benchmark test functions: Ackley10, Levy10 and Alpine10... For this task, we use the Rhodopsin-family protein dataset provided in (Karasuyama et al. 2018)... We use two benchmark datasets published in (Siegmund et al. 2015) for the software performance prediction problem: HSMGP (3456 data points) and HIPACC (13485 data points).
Dataset Splits | No | The paper mentions splitting data into training and validation sets for hyperparameter tuning ('we split the current observed data Dt into a training and a validation set'), but it does not give the split percentages, sample counts, or any other details needed to reproduce the splits.
Hardware Specification | Yes | All the experiments are run on multiple servers, where each server has multiple Tesla V100 SXM2 32GB GPUs.
Software Dependencies | No | The paper mentions using 'Thermo-Calc software' and 'feedforward neural network (FNN)' but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | For all problems with dimension d, the optimization process is initialized with 3d initial points (for synthetic functions) and 5d initial points (for real-world problems), sampled following a Latin hypercube sampling scheme (Jones 2001). For all tasks, the experiments were repeated 5 times for the synthetic functions and 3 times for the real-world experiments... The major hyperparameters for the iNAS tuning process are the number of layers and the number of neurons per layer, whilst the minor hyperparameters are the learning rate and the drop-out rate. The iNAS tuning process is initialized with an FNN with 1 layer and 256 neurons/layer... the batch size is set to 10d, with d being the dimension of the LSE problem.
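To make the quoted setup concrete, the sketch below shows how the initial design, batch size, and starting surrogate architecture could be instantiated. It is a minimal illustration under stated assumptions, not the authors' released code: the dimension d, the 3d/10d counts, and the 1-layer/256-neuron starting FNN come from the quoted text, while the SciPy Latin hypercube sampler, the unit-hypercube domain, the ReLU activation, the optimizer, and the drop-out/learning-rate values are assumptions made only for illustration.

```python
# Minimal sketch (not the authors' implementation) of the quoted experiment
# setup for an assumed d = 10 synthetic LSE problem.
import numpy as np
from scipy.stats import qmc
import torch
import torch.nn as nn

d = 10                   # problem dimension (e.g. Ackley10, Levy10, Alpine10)
n_init = 3 * d           # 3d initial points for synthetic functions (5d for real-world problems)
batch_size = 10 * d      # batch size of 10d, as quoted above

# Initial design via Latin hypercube sampling on an assumed [0, 1]^d domain.
sampler = qmc.LatinHypercube(d=d, seed=0)
X_init = sampler.random(n=n_init)        # shape (30, 10)

# Starting surrogate for the iNAS tuning process: an FNN with 1 hidden layer
# of 256 neurons.  The drop-out rate and learning rate are "minor"
# hyperparameters in the paper; the values below are placeholders, not the
# tuned ones.
model = nn.Sequential(
    nn.Linear(d, 256),
    nn.ReLU(),               # assumed activation
    nn.Dropout(p=0.1),       # placeholder drop-out rate
    nn.Linear(256, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # placeholder learning rate

print(X_init.shape, batch_size)          # (30, 10) 100
```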