Learning Bayesian Networks in the Presence of Structural Side Information
Authors: Ehsan Mokhtarian, Sina Akbari, Fateme Jamshidi, Jalal Etesami, Negar Kiyavash | Pages 7814-7822
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, we evaluate the performance and the scalability of our algorithms in both synthetic and real-world structures and show that they outperform the state-of-the-art structure learning algorithms. |
| Researcher Affiliation | Academia | Ehsan Mokhtarian,1 Sina Akbari,1 Fateme Jamshidi,2 Jalal Etesami,1 Negar Kiyavash 1,2 1 Department of Computer and Communication Science, EPFL, Lausanne, Switzerland 2 College of Management of Technology, EPFL, Lausanne, Switzerland {ehsan.mokhtarian, sina.akbari, fateme.jamshidi, seyed.etesami, negar.kiyavash}@epfl.ch |
| Pseudocode | Yes | Algorithm 1: Recursive Structure Learning (RSL). [...] Algorithm 5: Learns BN without side information. |
| Open Source Code | Yes | The MATLAB implementation of our algorithms is publicly available at https://github.com/Ehsan-Mokhtarian/RSL. |
| Open Datasets | Yes | Figure 3 illustrates the performance of BN learning algorithms on two real-world structures, namely Diabetes (Andreassen et al. 1991) and Andes (Conati et al. 1997) networks, over a range of different sample sizes. |
| Dataset Splits | No | The paper describes generating samples for experiments and refers to a 'finite sample setting', but it does not specify the exact percentages or counts for training, validation, and test splits, nor does it detail a cross-validation setup for its experiments. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions 'The MATLAB implementation of our algorithms' and 'Fisher Z-transformation' but does not provide specific version numbers for MATLAB or other key software dependencies. |
| Experiment Setup | Yes | The samples are generated using a linear model where each variable is a linear combination of its parents plus an exogenous noise variable; the coefficients are chosen uniformly at random from [-1.5, -1] ∪ [1, 1.5], and the noises are generated from N(0, σ²), where σ is selected uniformly at random from [ 1.5]. As for the CI tests, we use Fisher Z-transformation (Fisher 1915) with significance level 0.01 in the algorithms (alternative values did not alter our experimental results) and 2/n² for Mb discovery (Pellet and Elisseeff 2008). |
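The experiment setup quoted above can be sketched in code. The following is a minimal NumPy sketch, not the authors' MATLAB implementation: it draws samples from a linear-Gaussian model with edge coefficients uniform over [-1.5, -1] ∪ [1, 1.5], and runs a Fisher Z conditional-independence test. The noise-scale range `[1.0, 1.5]` is an assumption, since the lower bound of the σ interval is garbled in the extracted text.

```python
import math
import numpy as np

def simulate_linear_sem(adjacency, n_samples, rng=None):
    """Sample from a linear model: each variable is a linear combination of
    its parents plus exogenous Gaussian noise, as described in the paper.
    `adjacency[i, j] = 1` means edge i -> j; variables are assumed to be
    indexed in a topological order (i < j for every edge)."""
    rng = np.random.default_rng(rng)
    d = adjacency.shape[0]
    # Coefficients uniform over [-1.5, -1] ∪ [1, 1.5]: magnitude times random sign.
    magnitudes = rng.uniform(1.0, 1.5, size=(d, d))
    signs = rng.choice([-1.0, 1.0], size=(d, d))
    W = adjacency * magnitudes * signs
    # Noise std dev; the [1.0, 1.5] range is an assumption (garbled in the paper text).
    sigma = rng.uniform(1.0, 1.5, size=d)
    noise = rng.normal(0.0, sigma, size=(n_samples, d))
    X = np.zeros((n_samples, d))
    for j in range(d):  # generate in topological order
        X[:, j] = X @ W[:, j] + noise[:, j]
    return X

def fisher_z_ci(X, i, j, cond, alpha=0.01):
    """Fisher Z-transformation CI test of X_i ⟂ X_j | X_cond.
    Returns True when independence is NOT rejected at level `alpha`."""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(X[:, idx], rowvar=False)
    prec = np.linalg.inv(corr)
    # Partial correlation of the first two variables given the rest.
    r = -prec[0, 1] / math.sqrt(prec[0, 0] * prec[1, 1])
    n = X.shape[0]
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - len(cond) - 3)
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value under N(0, 1)
    return p > alpha
```

On a chain X0 -> X1 -> X2, the test should reject X0 ⟂ X1 and retain X0 ⟂ X2 | X1, mirroring how such CI calls drive Markov-blanket discovery in the algorithms.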