Latent Dependency Forest Models
Authors: Shanbo Chu, Yong Jiang, Kewei Tu
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results show that LDFMs are competitive with existing probabilistic models. |
| Researcher Affiliation | Academia | Shanbo Chu, Yong Jiang, and Kewei Tu; School of Information Science and Technology, ShanghaiTech University, Shanghai, China; {chushb, jiangyong, tukw}@shanghaitech.edu.cn |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any statement about, or link to, open-source code for the proposed method (LDFM). |
| Open Datasets | Yes | We picked nine BNs that are frequently used in the BN learning literature from bnlearn (http://www.bnlearn.com/bnrepository/), a popular BN repository. |
| Dataset Splits | Yes | For each BN, we sampled two training sets of 5000 and 500 instances, one validation set of 1000 instances, and one testing set of 1000 instances. (A sampling sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions the Libra toolkit but does not provide version numbers for it or for any other software dependency. |
| Experiment Setup | No | The paper mentions tuning hyperparameters for other models and using EM for LDFM, but it does not provide concrete hyperparameter values or detailed training configurations. |
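The Dataset Splits row describes a per-network sampling procedure that is straightforward to reproduce. Below is a minimal sketch of that procedure, assuming pgmpy as the sampling library and `asia.bif` (one of the bnlearn repository networks) as a stand-in file; the paper names neither, only the bnlearn repository and the split sizes.

```python
# Sketch of the data preparation quoted in the "Dataset Splits" row.
# pgmpy and the file name 'asia.bif' are assumptions for illustration;
# the paper specifies only the bnlearn repository and the split sizes.
from pgmpy.readwrite import BIFReader
from pgmpy.sampling import BayesianModelSampling

# Load a bnlearn repository network (downloaded as a .bif file).
model = BIFReader("asia.bif").get_model()
sampler = BayesianModelSampling(model)

# Split sizes quoted from the paper: two training sets (5000 and 500
# instances), one validation set (1000), and one test set (1000).
train_large = sampler.forward_sample(size=5000)
train_small = sampler.forward_sample(size=500)
validation = sampler.forward_sample(size=1000)
test = sampler.forward_sample(size=1000)

# Persist each split as CSV for downstream model training.
for name, df in [("train_large", train_large), ("train_small", train_small),
                 ("validation", validation), ("test", test)]:
    df.to_csv(f"{name}.csv", index=False)
```

Forward sampling draws each instance by sampling the variables in topological order from the network's conditional distributions, which is the standard way to generate fully observed data from a ground-truth BN.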