Robust Learning of Fixed-Structure Bayesian Networks in Nearly-Linear Time

Authors: Yu Cheng, Honghao Lin

ICLR 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this work, we present the first nearly-linear time algorithm for this problem with a dimension-independent error guarantee. Previous robust algorithms with comparable error guarantees are slower by at least a factor of d/ϵ, where d is the number of variables in the Bayesian network and ϵ is the fraction of corrupted samples. Our algorithm and analysis are considerably simpler than those in previous work. |
| Researcher Affiliation | Academia | Yu Cheng, Department of Mathematics, Statistics, and Computer Science (MSCS), University of Illinois at Chicago, yucheng2@uic.edu; Honghao Lin, Institute for Theoretical Computer Science, Shanghai University of Finance and Economics, Guilan3010@gmail.com |
| Pseudocode | Yes | Algorithm 1: Robustly Learning Bayesian Networks |
| Open Source Code | No | The paper does not contain any statement about making its source code available or providing a link to a code repository. |
| Open Datasets | No | The paper discusses drawing N samples from a distribution P, but this refers to a theoretical data-generation setup; it does not mention using or providing access to any specific publicly available datasets (e.g., MNIST, CIFAR-10) with links, DOIs, or citations. |
| Dataset Splits | No | The paper is theoretical and does not describe empirical experiments with training, validation, and test dataset splits. It describes a theoretical setting in which N samples are drawn and an ϵ-fraction of them are corrupted, but this is not a practical dataset split for reproduction. |
| Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for running experiments. It mentions computational bottlenecks of existing algorithms but not the hardware of its own experimental setup. |
| Software Dependencies | No | The paper is theoretical, describing algorithms and proofs. It does not specify any software dependencies with version numbers (e.g., Python, PyTorch, specific libraries or solvers) that would be needed to reproduce an implementation. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and analysis. It does not describe an empirical experimental setup, and therefore does not mention concrete hyperparameter values or system-level training settings. |
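To make the theoretical setup in the table above concrete, the following is a minimal sketch of the ϵ-contamination model the paper assumes: N samples are drawn from a fixed-structure Bayesian network, and an adversary replaces an ϵ-fraction of them. The toy edgeless (product) network, the all-ones corruption, and all function names here are illustrative assumptions for exposition, not the paper's Algorithm 1; the sketch only shows why the naive empirical estimator can be shifted by Θ(ϵ).

```python
import random

def sample_product_bayes_net(n, probs, seed=0):
    """Draw n samples from a toy edgeless (product) Bayesian network
    over binary variables, where probs[i] = Pr[X_i = 1].  This is an
    illustrative stand-in for a general fixed-structure network."""
    rng = random.Random(seed)
    return [[int(rng.random() < p) for p in probs] for _ in range(n)]

def contaminate(samples, eps):
    """Adversarially replace an eps-fraction of the samples (here, with
    the all-ones point) -- the strong-contamination model assumed in
    the theoretical setting described above."""
    m = int(eps * len(samples))
    d = len(samples[0])
    return [[1] * d] * m + samples[m:]

def empirical_marginals(samples):
    """Coordinate-wise empirical means: the non-robust estimator that
    an eps-fraction of outliers can shift by Theta(eps)."""
    n, d = len(samples), len(samples[0])
    return [sum(s[i] for s in samples) / n for i in range(d)]

clean = sample_product_bayes_net(10_000, [0.2, 0.5, 0.8])
corrupted = contaminate(clean, eps=0.1)
print(empirical_marginals(clean)[0])      # close to 0.2
print(empirical_marginals(corrupted)[0])  # pulled upward by the outliers
```

The point of the sketch: on the corrupted sample, the first marginal estimate moves away from 0.2 by roughly ϵ, which is exactly the failure mode a robust learner must avoid.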