Structure Learning with Side Information: Sample Complexity

Authors: Saurabh Sihag, Ali Tajer

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, in sharp contrast, we provide an information-theoretic perspective for jointly learning the structures of a pair of similar Ising models... Furthermore, we also analyze a maximum likelihood (ML)-based graph decoder to establish sufficient conditions on the sample complexity... Finally, we provide numerical evaluations of the ML-based decoder to study the effect of structural similarity on its performance. The decoder's performance is studied through simulations: the figure shows these variations for different levels of graph similarity (η = 0.1, 0.3) and different values of the recovery approximation (d = 1, 3). The probability of error was evaluated empirically over 6000 trials.
Researcher Affiliation | Academia | Saurabh Sihag and Ali Tajer, Electrical, Computer, and Systems Engineering Department, Rensselaer Polytechnic Institute
Pseudocode | No | The paper describes the ML graph decoder conceptually but does not provide structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not contain any statements about making source code for the described methodology publicly available, nor does it provide a link to a code repository.
Open Datasets | No | The paper uses generated samples for its numerical evaluations, stating: "The probability of error was evaluated empirically over 6000 trials." It does not refer to a publicly available dataset with concrete access information (link, DOI, repository, or formal citation).
Dataset Splits | No | The paper describes its numerical evaluations using generated samples but does not provide specific train/validation/test dataset splits (e.g., percentages, sample counts, or references to predefined splits).
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory, cloud instances) used to run the numerical evaluations or simulations.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or specialized solvers) used for the experiments.
Experiment Setup | Yes | We first considered a graph with p = 100 vertices and α = 20... For the results in this figure we have fixed α = 20 and η = 0.5, and have evaluated the performance based on n = 40 samples from each graph. In this ensemble, we set the size of the graphs to p, with q = ηp nodes in the shared subgraph. We assumed that each graph contains α isolated edges, with ηα edges lying in the shared subgraph. Furthermore, the graphs in this ensemble were constructed in the following manner. We grouped the non-shared cluster of (p − q) vertices into (p − q)/2 fixed pairs and randomly connected (α − ηα) of these pairs. Similarly, the q vertices of the shared subgraph were grouped into q/2 fixed pairs, and ηα pairs were selected randomly to be connected.
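The ensemble construction quoted above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the function name `sample_graph_pair` and the convention of pairing consecutive vertex indices are assumptions made for the sketch; only the parameters p, α, η, and q = ηp come from the paper.

```python
import random


def sample_graph_pair(p=100, alpha=20, eta=0.5, seed=0):
    """Sketch of the isolated-edge graph ensemble described in the paper.

    Each of the two graphs has p vertices and alpha isolated edges;
    q = eta*p vertices form a shared subgraph containing eta*alpha
    identical edges, while the remaining edges are drawn independently.
    Vertex pairing (consecutive indices) is an illustrative assumption.
    """
    rng = random.Random(seed)
    q = int(eta * p)                       # size of the shared subgraph
    shared_count = int(eta * alpha)        # edges lying in the shared part

    # Shared subgraph: q vertices grouped into q/2 fixed pairs;
    # eta*alpha of those pairs are connected, identically in both graphs.
    shared_pairs = [(i, i + 1) for i in range(0, q, 2)]
    shared_edges = set(rng.sample(shared_pairs, shared_count))

    # Non-shared cluster: (p - q) vertices in (p - q)/2 fixed pairs;
    # each graph independently connects (alpha - eta*alpha) of them.
    private_pairs = [(i, i + 1) for i in range(q, p, 2)]
    k = alpha - shared_count
    g1 = shared_edges | set(rng.sample(private_pairs, k))
    g2 = shared_edges | set(rng.sample(private_pairs, k))
    return g1, g2
```

With the paper's settings (p = 100, α = 20, η = 0.5), each sampled graph has 20 edges, 10 of which lie in the shared 50-vertex subgraph and are common to both graphs.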