Support Consistency of Direct Sparse-Change Learning in Markov Networks

Authors: Song Liu, Taiji Suzuki, Masashi Sugiyama

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The KLIEP algorithm was experimentally demonstrated to be a promising method in sparse structure-change learning between two MNs (Liu et al. 2014). In this paper, we theoretically established sufficient conditions for its successful change detection. Our notable finding is that the number of samples needed for successful change detection does not depend on the number of edges in each MN, but only on the number of changed edges between the two MNs. We also provide numerical illustrations for our theories. One important consequence of Theorem 1 is that, for fixed d, the number of samples n_p required for detecting the sparse changes grows with log((m²+m)/2). We now illustrate this effect via experiments. The first set of experiments is performed on four-neighbor lattice-structured MNs. We draw samples from a Gaussian lattice-structured MN P. Then we remove 4 edges randomly to construct another Gaussian MN Q. We consider the scaling of m = 9, 16, 25, n_p ∈ [3000, 10000], and n_p = n_q. As suggested by Theorem 1, λ_{n_p} is set to a constant factor of √(log((m²+m)/2) / n_p). The rate of successful change detection versus the number of samples n_p normalized by log((m²+m)/2) is plotted in Figure 1(a). Each point corresponds to the probability of success over 20 runs.
Researcher Affiliation | Academia | Song Liu (song@sg.cs.titech.ac.jp), Tokyo Institute of Technology; Taiji Suzuki (s-taiji@is.titech.ac.jp), Tokyo Institute of Technology; Masashi Sugiyama (sugi@k.u-tokyo.ac.jp), University of Tokyo
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | No | The paper uses synthetic data generated for the experiments and does not provide access information for a publicly available or open dataset. For example, it states: "We draw samples from a Gaussian lattice-structured MN P. Then we remove 4 edges randomly, to construct another Gaussian MN Q."
Dataset Splits | No | The paper does not specify distinct training, validation, or test dataset splits. It describes generating sample sizes n_p and n_q for the Markov networks to evaluate the consistency of change detection, not a standard train/val/test split for model training and evaluation.
Hardware Specification | No | The paper does not provide any specific hardware details used for running its experiments. No mentions of CPUs, GPUs, or other computing resources with specifications are made.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | Yes | The first set of experiments is performed on four-neighbor lattice-structured MNs. We draw samples from a Gaussian lattice-structured MN P. Then we remove 4 edges randomly to construct another Gaussian MN Q. We consider the scaling of m = 9, 16, 25, n_p ∈ [3000, 10000], and n_p = n_q. As suggested by Theorem 1, λ_{n_p} is set to a constant factor of √(log((m²+m)/2) / n_p). The rate of successful change detection versus the number of samples n_p normalized by log((m²+m)/2) is plotted in Figure 1(a). Each point corresponds to the probability of success over 20 runs. We next perform experiments on a non-Gaussian distribution with a diamond shape used in (Liu et al. 2014). The MNs are constructed in the same way as in the previous experiment, while the samples are generated via slice sampling (Neal 2003).
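As a rough illustration of the setup quoted above, the sketch below (a hypothetical reconstruction, not the authors' code; the coupling strength, random seed, and the constant in front of λ are assumptions) builds a Gaussian four-neighbor lattice MN P as a precision matrix, removes 4 random edges to obtain Q, sets λ_{n_p} proportional to √(log((m²+m)/2) / n_p), and draws n_p samples from P:

```python
import numpy as np

def lattice_precision(side, coupling=0.2):
    """Precision matrix of a Gaussian MN on a side x side four-neighbor lattice.
    `coupling` is an assumed interaction weight, kept small so the matrix
    stays diagonally dominant (hence positive definite)."""
    m = side * side
    theta = np.eye(m)
    for r in range(side):
        for c in range(side):
            i = r * side + c
            if c + 1 < side:                      # right neighbor
                theta[i, i + 1] = theta[i + 1, i] = coupling
            if r + 1 < side:                      # down neighbor
                theta[i, i + side] = theta[i + side, i] = coupling
    return theta

def remove_random_edges(theta, k, rng):
    """Zero out k randomly chosen existing off-diagonal edges (kept symmetric)."""
    q = theta.copy()
    m = q.shape[0]
    edges = [(i, j) for i in range(m) for j in range(i + 1, m) if q[i, j] != 0]
    picked = rng.choice(len(edges), size=k, replace=False)
    for t in picked:
        i, j = edges[t]
        q[i, j] = q[j, i] = 0.0
    return q

rng = np.random.default_rng(0)
side = 3
m = side * side                                   # m = 9, the paper's smallest case
theta_p = lattice_precision(side)
theta_q = remove_random_edges(theta_p, k=4, rng=rng)

n_p = 3000
# Regularization scaling suggested by Theorem 1; the factor 1.0 is an assumption.
lam = 1.0 * np.sqrt(np.log((m**2 + m) / 2) / n_p)

# Samples from P: zero-mean Gaussian with covariance = inverse precision.
x_p = rng.multivariate_normal(np.zeros(m), np.linalg.inv(theta_p), size=n_p)
```

Repeating this over the grid m = 9, 16, 25 and n_p ∈ [3000, 10000], and counting how often the changed edge set is exactly recovered over 20 runs, would reproduce the kind of success-rate curve described for Figure 1(a).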