Bayesian Functional Optimisation with Shape Prior
Authors: Pratibha Vellanki, Santu Rana, Sunil Gupta, David Rubin de Celis Leal, Alessandra Sutti, Murray Height, Svetha Venkatesh
AAAI 2019, pp. 1617-1624 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our approach for short polymer fibre design and optimising learning rate schedules for deep networks. |
| Researcher Affiliation | Academia | 1 Centre for Pattern Recognition and Data Analytics, Deakin University, Geelong, Australia, {pratibha.vellanki, santu.rana, sunil.gupta, svetha.venkatesh}@deakin.edu.au; 2 Institute for Frontier Materials, GTP Research, Deakin University, Geelong, Australia, {d.rubindecelisleal, alessandra.sutti, murray.height}@deakin.edu.au |
| Pseudocode | Yes | Algorithm 1 Framework for control function optimisation. |
| Open Source Code | No | The code will be made available upon request. |
| Open Datasets | Yes | For CIFAR-10 we use a network architecture that can be summarised as (Conv2D → Dropout → Conv2D → MaxPooling2D) × 3 → Flatten → (Dropout → Dense) × 3, whereas for MNIST the network architecture used is Conv2D → MaxPooling2D → Dropout → Flatten → Dense → Dense. (A hedged architecture sketch follows the table.) |
| Dataset Splits | No | The paper mentions "Validation error" in Table 1 for the CIFAR-10 and MNIST datasets, implying the use of a validation set, but it does not provide specific details about the dataset split (e.g., percentages or sample counts) for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using "Adam" and "SGD" optimizers, and refers to "Kinga and Adam (2015)" for Adam, but it does not specify version numbers for any software dependencies like programming languages (e.g., Python), libraries (e.g., TensorFlow, PyTorch), or specific optimizers. |
| Experiment Setup | Yes | For all experiments, we start with a 5th order Bernstein polynomial basis, but limit the highest order to 10. The change of order is triggered due to hitting the derivative limit when it reaches 95% of the maximum derivative magnitude possible. For Bayesian optimisation the range of learning rate was chosen between 0.2 and 0.0001. For Adam and SGD the starting learning rate used was 0.01, with 0.8 momentum for SGD and default values for hyper-parameters of Adam (Kinga and Adam 2015). |
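The "Experiment Setup" row above describes a learning-rate schedule expressed in a Bernstein polynomial basis (starting at order 5, capped at order 10) with learning rates confined to [0.0001, 0.2]. The sketch below shows one minimal way such a schedule could be evaluated; the coefficient values, the clipping strategy, and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from math import comb

def bernstein_basis(order: int, t: np.ndarray) -> np.ndarray:
    """Bernstein basis B_{k,n}(t) = C(n, k) * t^k * (1 - t)^(n - k) for t in [0, 1]."""
    t = np.asarray(t, dtype=float)
    return np.stack(
        [comb(order, k) * t**k * (1.0 - t)**(order - k) for k in range(order + 1)],
        axis=-1,
    )

def lr_schedule(coeffs: np.ndarray, t: np.ndarray,
                lr_min: float = 1e-4, lr_max: float = 0.2) -> np.ndarray:
    """Learning-rate curve as a Bernstein expansion over normalised training progress t.

    `coeffs` are the weights a Bayesian optimiser would search over; they are clipped
    to the quoted range [0.0001, 0.2] here (an assumed strategy, not the paper's).
    """
    order = len(coeffs) - 1
    coeffs = np.clip(coeffs, lr_min, lr_max)
    return bernstein_basis(order, t) @ coeffs

# Example: a 5th-order schedule evaluated over 50 epochs (coefficients are made up).
t = np.linspace(0.0, 1.0, 50)
coeffs = np.array([0.05, 0.08, 0.04, 0.02, 0.01, 0.001])
lrs = lr_schedule(coeffs, t)
```

Because the Bernstein basis functions are non-negative and sum to one, the schedule is a convex combination of the coefficients, so clipping the coefficients to [0.0001, 0.2] keeps every point of the resulting curve inside the quoted learning-rate range.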
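The "Open Datasets" row quotes the CIFAR-10 and MNIST architectures only at the level of layer ordering. The sketch below, assuming the Keras API, reproduces that ordering; the filter counts, kernel sizes, dropout rates, and dense-layer widths are assumptions, since the paper does not state them.

```python
from tensorflow import keras
from tensorflow.keras import layers

def cifar10_model(num_classes: int = 10) -> keras.Model:
    """(Conv2D -> Dropout -> Conv2D -> MaxPooling2D) x 3 -> Flatten -> (Dropout -> Dense) x 3."""
    inputs = keras.Input(shape=(32, 32, 3))
    x = inputs
    for filters in (32, 64, 128):  # widths are assumed, not taken from the paper
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Dropout(0.25)(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    for units in (256, 128, num_classes):  # last Dense pair acts as the classifier head
        x = layers.Dropout(0.5)(x)
        x = layers.Dense(units, activation="softmax" if units == num_classes else "relu")(x)
    return keras.Model(inputs, x)

def mnist_model(num_classes: int = 10) -> keras.Model:
    """Conv2D -> MaxPooling2D -> Dropout -> Flatten -> Dense -> Dense."""
    inputs = keras.Input(shape=(28, 28, 1))
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Dropout(0.25)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(128, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)
```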