Variational Bayesian Phylogenetic Inference
Authors: Cheng Zhang, Frederick A. Matsen IV
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both synthetic data and real data Bayesian phylogenetic inference problems demonstrate the effectiveness and efficiency of our methods. |
| Researcher Affiliation | Academia | Cheng Zhang, Frederick A. Matsen IV Computational Biology Program Fred Hutchinson Cancer Research Center Seattle, WA 98109, USA {czhang23,matsen}@fredhutch.org |
| Pseudocode | Yes | See algorithm 1 in Appendix B for a basic variational Bayesian phylogenetic inference (VBPI) approach. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | In the second set of experiments we evaluate the proposed variational Bayesian phylogenetic inference (VBPI) algorithms at estimating unrooted phylogenetic tree posteriors on 8 real datasets commonly used to benchmark phylogenetic MCMC methods (Lakner et al., 2008; Höhna & Drummond, 2012; Larget, 2013; Whidden & Matsen IV, 2015) (Table 1). |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages or sample counts for training, validation, and test sets) for reproducibility. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., exact GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | Yes | We run MrBayes with 4 chains and 10 runs for two million iterations, sampling every 100 iterations. (Referring to MrBayes 3.2.5 mentioned earlier: "compare to MrBayes 3.2.5 (Ronquist et al., 2012)") |
| Experiment Setup | Yes | For VIMCO, we use Adam for stochastic gradient ascent with learning rate 0.001 (Kingma & Ba, 2015). For RWS, we also use AMSGrad (Sashank et al., 2018), a recent variant of Adam, when Adam is unstable. Results were collected after 200,000 parameter updates. We use a slightly larger learning rate (0.002) in AMSGrad for RWS. We use Adam with learning rate 0.001 to train the variational approximations using VIMCO and RWS estimators with 10 and 20 samples. Following Rezende & Mohamed (2015), we use a simple annealed version of the lower bound, which was found to provide better results. The modified bound is $L^K_{\beta_t}(\phi, \psi) = \mathbb{E}_{Q_{\phi,\psi}(\tau^{1:K}, q^{1:K})} \log\left(\frac{1}{K}\sum_{i=1}^{K}\frac{[p(Y\mid\tau^i, q^i)]^{\beta_t}\, p(\tau^i, q^i)}{Q_\phi(\tau^i)\, Q_\psi(q^i\mid\tau^i)}\right)$, where $\beta_t \in [0, 1]$ is an inverse temperature that follows the schedule $\beta_t = \min(1, 0.001 + t/100000)$, going from 0.001 to 1 after 100,000 iterations (see the sketch after this table). |
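
The experiment-setup row above describes an annealed, multi-sample lower bound and an inverse-temperature schedule. Below is a minimal Python sketch of those two pieces, not the authors' code: the function names (`inverse_temperature`, `annealed_multisample_bound`) and the toy log densities are hypothetical, and in VBPI the log densities would come from the SBN tree approximation $Q_\phi$, the branch-length approximation $Q_\psi$, and the phylogenetic likelihood $p(Y\mid\tau, q)$.

```python
import math

def inverse_temperature(t, eps=0.001, anneal_steps=100_000):
    """Annealing schedule beta_t = min(1, eps + t / anneal_steps):
    starts near eps and reaches 1 after anneal_steps iterations."""
    return min(1.0, eps + t / anneal_steps)

def annealed_multisample_bound(log_lik, log_prior, log_q, beta):
    """Monte Carlo estimate of the annealed K-sample lower bound
    L^K_beta = log( (1/K) * sum_i [p(Y|tau_i,q_i)]^beta * p(tau_i,q_i) / Q(tau_i,q_i) )
    given per-sample log densities for K samples drawn from Q."""
    K = len(log_lik)
    # Per-sample log importance weights, with the likelihood tempered by beta.
    log_w = [beta * ll + lp - lq for ll, lp, lq in zip(log_lik, log_prior, log_q)]
    # log-mean-exp over the K samples, computed stably.
    m = max(log_w)
    return m + math.log(sum(math.exp(w - m) for w in log_w) / K)

# Toy usage with made-up log densities for K = 10 samples at iteration t = 50,000.
beta = inverse_temperature(50_000)                    # ~0.501
log_lik   = [-1234.5 + 0.1 * i for i in range(10)]    # hypothetical log p(Y | tau_i, q_i)
log_prior = [-25.0] * 10                              # hypothetical log p(tau_i, q_i)
log_q     = [-22.0] * 10                              # hypothetical log Q(tau_i, q_i)
print(annealed_multisample_bound(log_lik, log_prior, log_q, beta))
```

In the actual training loop this bound estimate would be maximized with Adam (learning rate 0.001, or 0.002 with AMSGrad for RWS), using VIMCO or RWS gradient estimators over the K samples as described in the table.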