Robust Learning of Fixed-Structure Bayesian Networks
Authors: Yu Cheng, Ilias Diakonikolas, Daniel Kane, Alistair Stewart
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments. We performed an experimental evaluation of our algorithm on both synthetic and real data. Our evaluation allowed us to verify the accuracy and the sample complexity rates of our theoretical results. In all cases, the experiments validate the usefulness of our algorithm, which significantly outperforms previous approaches, almost exactly matching the best rate without noise. |
| Researcher Affiliation | Academia | Yu Cheng, Department of Computer Science, Duke University, Durham, NC 27708, yucheng@cs.duke.edu; Ilias Diakonikolas, Department of Computer Science, University of Southern California, Los Angeles, CA 90089, ilias.diakonikolas@gmail.com; Daniel M. Kane, Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093, dakane@ucsd.edu; Alistair Stewart, Department of Computer Science, University of Southern California, Los Angeles, CA 90089, stewart.al@gmail.com |
| Pseudocode | Yes | Algorithm 1 Filter-Known-Topology |
| Open Source Code | No | The paper does not mention providing open-source code for the described methodology. |
| Open Datasets | Yes | The ALARM network [BSCC89] is a classic Bayes net that implements a medical diagnostic system for patient monitoring. |
| Dataset Splits | No | The paper describes the generation of samples for synthetic and semi-synthetic experiments but does not provide specific details on train/validation/test dataset splits for model training or hyperparameter tuning. |
| Hardware Specification | Yes | All experiments were run on a laptop with 2.6 GHz CPU and 8 GB of RAM. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | In the synthetic experiment, we set ϵ = 0.1 and first generate a Bayes net P with 100 ≤ m ≤ 1000 parameters. We then generate N = 10m/ϵ² samples, where a (1 − ϵ)-fraction of the samples come from the ground truth P, and the remaining ϵ-fraction comes from a noise distribution. ... We draw the parameters of P independently from [0, 1/4] ∪ [3/4, 1] uniformly at random... For ϵ ∈ {0.05, 0.1, . . . , 0.4}, we draw N = 10^6 samples, where a (1 − ϵ)-fraction of the samples come from ALARM, and the other ϵ-fraction comes from a noise distribution. |
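The contaminated sampling scheme described in the setup row can be sketched as follows. This is a minimal illustration of mixing a (1 − ϵ)-fraction of ground-truth samples with an ϵ-fraction of noise samples, not the authors' code; the toy two-node Bayes net and the uniform-bit noise distribution below are hypothetical stand-ins for the paper's generated networks and the ALARM network.

```python
import numpy as np

def contaminated_samples(sample_truth, sample_noise, n, eps, seed=None):
    """Draw n samples: a (1 - eps)-fraction from the ground-truth
    distribution and the remaining eps-fraction from a noise
    distribution, shuffled together (epsilon-contamination model)."""
    rng = np.random.default_rng(seed)
    n_noise = int(round(eps * n))
    x = np.concatenate([sample_truth(n - n_noise, rng),
                        sample_noise(n_noise, rng)], axis=0)
    return x[rng.permutation(len(x))]  # hide which rows are corrupted

# Hypothetical ground truth: a 2-node Bayes net X1 -> X2 over {0, 1}.
def truth(n, rng):
    x1 = (rng.random(n) < 0.8).astype(int)
    x2 = (rng.random(n) < np.where(x1 == 1, 0.9, 0.2)).astype(int)
    return np.stack([x1, x2], axis=1)

# Hypothetical noise distribution: uniform random bits.
def noise(n, rng):
    return rng.integers(0, 2, size=(n, 2))

samples = contaminated_samples(truth, noise, n=100_000, eps=0.1, seed=0)
```

With ϵ = 0.1 and m parameters, the paper's synthetic experiment would call this with n = 10m/ϵ² samples.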