Efficient Projection onto the Perfect Phylogeny Model

Authors: Bei Jia, Surjyendu Ray, Sam Safavi, José Bento

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our algorithm to solve (3) exactly in a finite number of steps is of interest in itself. Still, it is interesting to compare it with other algorithms. In particular, we compare the convergence rate of our algorithm with two popular methods that solve (3) iteratively: the Alternating Direction Method of Multipliers (ADMM) and the Projected Gradient Descent (PGD) method. We apply ADMM and PGD to both the primal formulation (3) and the dual formulation (4). We implemented all the algorithms in C and derived closed-form updates for ADMM and PGD; see Appendix F. We ran all algorithms on a single core of an Intel Core i5 2.5GHz processor. (A hedged PGD sketch appears after this table.)
Researcher Affiliation | Academia | Bei Jia (jiabe@bc.edu), Surjyendu Ray (raysc@bc.edu), Sam Safavi (safavisa@bc.edu), and José Bento (jose.bento@bc.edu), all at Boston College.
Pseudocode | Yes | Algorithm 1: Projection onto the PPM (input: T and F̂; output: M and F)
Open Source Code | Yes | [32] GitHub repository for the PPM projection algorithm, https://github.com/bentoayr/efficient-projection-onto-the-perfect-phylogeny-model, accessed 2018-10-26.
Open Datasets | No | The paper generates synthetic data: 'random Galton-Watson input tree truncated to have q = 1000 nodes, with the number of children of each node chosen uniformly within a fixed range, and for a random input F̂ ∈ R^q, with entries chosen i.i.d. from a normal distribution.' No public dataset or access information is provided. (A hedged data-generation sketch appears after this table.)
Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology).
Hardware Specification | Yes | We ran all algorithms on a single core of an Intel Core i5 2.5GHz processor.
Software Dependencies | No | The paper only states 'We implemented all the algorithms in C', without version numbers for the compiler, libraries, or other software dependencies.
Experiment Setup | No | The paper mentions tuning ADMM and PGD but does not provide specific hyperparameter values or detailed configurations for its own algorithm or the compared methods.
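
As referenced in the Research Type row, the paper compares its exact algorithm against iterative ADMM and PGD baselines implemented in C. Below is a minimal PGD sketch in C, assuming the primal objective is the Euclidean projection f(x) = 0.5·||x − F̂||². The step size, iteration count, and the nonnegativity projection are illustrative assumptions only; the actual PPM constraint set is more structured, and the paper's tuned parameters are not reported.

```c
/* Hedged PGD sketch: minimize 0.5*||x - f_hat||^2 over a constraint set C.
 * Projection onto the nonnegative orthant stands in for the (more involved)
 * PPM constraints; step size and iteration count are arbitrary, not tuned. */
#include <stdio.h>
#include <stdlib.h>

#define Q    1000   /* dimension, matching q = 1000 in the experiments */
#define STEP 0.5    /* assumed step size; the paper tunes this */
#define ITER 200    /* assumed iteration budget */

int main(void) {
    static double f_hat[Q], x[Q];
    srand(7);
    for (int i = 0; i < Q; i++) {
        f_hat[i] = (double)rand() / RAND_MAX - 0.5;  /* stand-in input */
        x[i] = 0.0;                                   /* start at the origin */
    }
    for (int k = 0; k < ITER; k++) {
        for (int i = 0; i < Q; i++) {
            x[i] -= STEP * (x[i] - f_hat[i]); /* gradient step on the objective */
            if (x[i] < 0.0) x[i] = 0.0;       /* placeholder projection onto C */
        }
    }
    printf("x[0] after %d PGD iterations: %f\n", ITER, x[0]);
    return 0;
}
```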
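
Likewise, as referenced in the Open Datasets row, the following sketch reproduces the described synthetic-data generation: a Galton-Watson tree truncated at q = 1000 nodes with per-node child counts drawn uniformly from a fixed range, plus an input F̂ ∈ R^q with i.i.d. normal entries. The specific range [1, 5] and the Box-Muller sampler are assumptions; the paper states only "a fixed range" and "a normal distribution". Compile with -lm.

```c
/* Hedged sketch of the synthetic data generation described in the paper:
 * a Galton-Watson tree truncated at Q nodes, child counts uniform in
 * [CMIN, CMAX] (assumed range), and i.i.d. standard-normal f_hat entries. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define Q    1000  /* number of tree nodes after truncation */
#define CMIN 1     /* assumed lower bound on children per node */
#define CMAX 5     /* assumed upper bound on children per node */
#define PI   3.14159265358979323846

int main(void) {
    static int parent[Q];
    static double f_hat[Q];
    int head = 0, tail = 1;  /* BFS frontier over allocated nodes */
    parent[0] = -1;          /* node 0 is the root */
    srand(42);

    /* Grow the tree breadth-first, truncating once Q nodes exist. */
    while (head < tail && tail < Q) {
        int nchild = CMIN + rand() % (CMAX - CMIN + 1);
        for (int c = 0; c < nchild && tail < Q; c++)
            parent[tail++] = head;
        head++;
    }

    /* i.i.d. standard-normal entries via the Box-Muller transform. */
    for (int i = 0; i < Q; i++) {
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        f_hat[i] = sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
    }

    printf("nodes: %d, parent of node 1: %d, f_hat[0] = %f\n",
           tail, parent[1], f_hat[0]);
    return 0;
}
```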