Bayesian Optimisation of Functions on Graphs
Authors: Xingchen Wan, Pierre Osselin, Henry Kenlay, Binxin Ru, Michael A Osborne, Xiaowen Dong
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both synthetic and real-world graphs demonstrate the effectiveness of the proposed optimisation framework. |
| Researcher Affiliation | Academia | Department of Engineering Science, University of Oxford {xwan,osselinp,kenlay,robin,mosb,xdong}@robots.ox.ac.uk |
| Pseudocode | Yes | an algorithmic description is available in Algorithm 1. ... Algorithm 1 Bayesian Optimisation on Graphs (BayesOptG) ... Algorithm 2 Selecting a local subgraph |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | We consider the following synthetic tasks: Maximising centrality scores (Fig. 5 and 6; Fig. 18 in App. C.2): we aim to find the node with maximum centrality measure, from a graph sampled from a random graph model... We consider the following real-life tasks: Identifying the patient zero (Fig. 8; Fig. 20 to 23 in App. C.3): ...We use a real-world contact network based on Bluetooth proximity [2]... Identifying influential users in a social network (Fig. 9): ...We use three real-world networks, namely the Enron email network [25], Facebook page network [33], and Twitch social network [32]. |
| Dataset Splits | Yes | We first validate the predictive power of the adopted kernels in controlled regression experiments. ... We compare the performance in terms of validation error and show the results in Fig. 4 (results for other graph types are shown in App. C.1). ... Figure 4: Validation of predictive powers of kernels considered on a BA graph of size n = 200 nodes and parameter m = 1, with (a) function values on the nodes corresponding to elements of the eigenvector corresponding to the second smallest eigenvalue and (b) same as above, but corrupted with noise standard deviation σ = 0.05. The leftmost column shows the visualisation of the ground truth, and the right columns show the GP posterior mean and standard deviation (error bars) learned by the different kernels against ground truth with Spearman correlation ρ and learned r⁻¹(λ) (Eq. 1). |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. It makes no mention of the computing environment. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers like Python 3.8, PyTorch 1.9). No software dependencies are mentioned with versions. |
| Experiment Setup | Yes | Determining the local subgraph size. The local subgraph size at iteration t (Qt) is a hyperparameter of the algorithm. We adapt the idea of trust regions from trust region-based optimisation algorithms [7, 13, 43] to adaptively set the size of Qt as the optimisation progresses: specifically, we initialise Q0 (initial neighbourhood size), succ_tol (success tolerance), fail_tol (failure tolerance) and γ > 1 (multiplier) as hyperparameters... We provide an ablation study in App. D to show the robustness of BayesOptG to hyperparameters. We report sensitivity analyses to the most important hyperparameters below, namely Q0 (initial trust region size), fail_tol, η (order of the kernels) and γ (the trust region multiplier in case of successive successes or failures). |
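The trust-region schedule quoted above (grow the local subgraph after `succ_tol` consecutive successes, shrink it after `fail_tol` consecutive failures, scaling by γ > 1) can be sketched as follows. This is a minimal illustration based on standard trust-region Bayesian optimisation conventions, not the paper's implementation; the function name and the exact rounding/reset behaviour are assumptions.

```python
def update_subgraph_size(Q, n_succ, n_fail, succ_tol=3, fail_tol=10, gamma=2.0):
    """Hypothetical sketch of the adaptive subgraph-size rule.

    Q        -- current local subgraph size Q_t
    n_succ   -- consecutive iterations that improved the best value
    n_fail   -- consecutive iterations without improvement
    Returns the updated (Q, n_succ, n_fail).
    """
    if n_succ >= succ_tol:
        # Successive successes: expand the local subgraph by factor gamma.
        Q = round(Q * gamma)
        n_succ = 0
    elif n_fail >= fail_tol:
        # Successive failures: shrink the local subgraph (at least 1 node).
        Q = max(1, round(Q / gamma))
        n_fail = 0
    return Q, n_succ, n_fail


# Example: after 3 straight successes, Q doubles; after 10 straight
# failures, Q halves.
print(update_subgraph_size(10, n_succ=3, n_fail=0))   # (20, 0, 0)
print(update_subgraph_size(10, n_succ=0, n_fail=10))  # (5, 0, 0)
```

The counters would be incremented (and the opposite counter reset) by the outer optimisation loop after each query of the black-box function.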