Robustifying Algorithms of Learning Latent Trees with Vector Variables
Authors: Fengzhuo Zhang, Vincent Tan
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present simulation results to demonstrate the efficacy of the robustified algorithms. Samples are generated from an HMM with lmax = 3 and Diam(T) = 80. The Robinson-Foulds distance [21] between the true and estimated trees is adopted to measure the performance of the algorithms. For the implementations of CLRG and RG, we use the code from [4]. Other settings and more extensive experiments are given in Appendix L. Fig. 2 (error bars are in Appendix L.1) demonstrates the superiority of RCLRG in learning HMMs compared to other algorithms. The robustified algorithms also yield smaller estimation errors (Robinson-Foulds distances) than their unrobustified counterparts in the presence of corruptions. |
| Researcher Affiliation | Academia | Fengzhuo Zhang, Department of Electrical and Computer Engineering, National University of Singapore, fzzhang@u.nus.edu; Vincent Y. F. Tan, Department of Electrical and Computer Engineering and Department of Mathematics, National University of Singapore, vtan@nus.edu.sg |
| Pseudocode | Yes | Algorithm 1 |
| Open Source Code | No | The paper states, 'For the implementations of CLRG and RG, we use the code from [4].' This indicates that the authors used existing code rather than releasing their own; no explicit statement of code release or link to a source code repository is provided for the described methodology. |
| Open Datasets | No | The paper states, 'Samples are generated from a HMM with lmax = 3 and Diam(T) = 80,' indicating synthetic data generation for experiments rather than the use of a publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper discusses generating 'n i.i.d. samples' and varying the 'Number of samples' in its experimental results, but it does not provide specific details on how these samples were partitioned into training, validation, or test sets for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'the code from [4]' for certain implementations but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or other libraries) that would be needed to replicate the experiments. |
| Experiment Setup | Yes | Samples are generated from an HMM with lmax = 3 and Diam(T) = 80. ... In our simulations, A is set to 60, and the number of corruptions n1 is 100. Other settings and more extensive experiments are given in Appendix L. |
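The evaluation metric named in the table, the Robinson-Foulds distance between the true and estimated trees, can be illustrated with a minimal sketch. This is not the paper's code: it assumes a rooted tree encoded as nested Python tuples of leaf labels (a hypothetical representation chosen for brevity) and computes the rooted variant of the distance, i.e. the number of clades that appear in exactly one of the two trees.

```python
from itertools import chain


def clades(tree):
    """Collect the leaf-set (clade) of every internal node of a rooted
    tree given as nested tuples, e.g. (('a', 'b'), ('c', 'd'))."""
    out = set()

    def walk(node):
        if isinstance(node, tuple):
            # Internal node: its clade is the union of its children's leaves.
            leaves = frozenset(chain.from_iterable(walk(c) for c in node))
            out.add(leaves)
            return leaves
        return frozenset([node])  # leaf node

    walk(tree)
    return out


def rf_distance(t1, t2):
    """Rooted Robinson-Foulds distance: size of the symmetric difference
    between the two trees' clade sets."""
    return len(clades(t1) ^ clades(t2))


# Identical topologies are at distance 0; fully incompatible groupings
# over four leaves differ in all four non-trivial clades.
t1 = (('a', 'b'), ('c', 'd'))
t2 = (('a', 'c'), ('b', 'd'))
print(rf_distance(t1, t1))  # 0
print(rf_distance(t1, t2))  # 4
```

Library implementations (e.g. `ete3`'s `Tree.robinson_foulds` or DendroPy's `treecompare.symmetric_difference`) handle the unrooted case used for general latent trees; the sketch above only conveys the idea of counting topological splits present in one tree but not the other.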