Learning Large-Scale Poisson DAG Models based on OverDispersion Scoring

Authors: Gunwoong Park, Garvesh Raskutti

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide both theoretical guarantees and simulation results for both small and large-scale DAGs.
Researcher Affiliation | Academia | Gunwoong Park, Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, parkg@stat.wisc.edu; Garvesh Raskutti, Department of Statistics, Department of Computer Science, and Wisconsin Institute for Discovery (Optimization Group), University of Wisconsin-Madison, Madison, WI 53706, raskutti@cs.wisc.edu
Pseudocode | Yes | Algorithm 1: OverDispersion Scoring (ODS); a hedged sketch of the overdispersion-scoring step appears after this table.
Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology.
Open Datasets | No | The paper states, 'The simulation study was conducted using 50 realizations of a p-node random Poisson DAG that was generated as follows.' It does not refer to a publicly available or open dataset, nor does it provide access to the generated data.
Dataset Splits | No | The paper describes using simulated data and sets parameters like c0 = 0.005 and λ = 0.1 for the algorithms, but it does not specify explicit training, validation, or test dataset splits in the traditional sense for a fixed dataset.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper mentions algorithms like GLMLasso [17], MMPC [15], HITON [13], PC [3], MMHC [15], GES [18], and SC [16], but it does not specify version numbers for these software components or libraries.
Experiment Setup | Yes | The simulation study was conducted using 50 realizations of a p-node random Poisson DAG that was generated as follows. The g_j(·) functions for the general Poisson DAG model (1) were chosen using the standard GLM link function, i.e. g_j(X_Pa(j)) = exp(θ_j + Σ_{k ∈ Pa(j)} θ_jk X_k), resulting in the GLM DAG model (2). In all results presented, the (θ_jk) parameters were chosen uniformly at random in the range θ_jk ∈ [−1, −0.7]... In our experiments, we always set the thresholding constant c0 = 0.005... 3 different algorithms are used for Step 1: GLMLasso [17], where we choose λ = 0.1; MMPC [15] with α = 0.005; and HITON [13], again with α = 0.005, and an oracle where the edges of the true moralized graph are used. A hedged data-generation sketch based on this setup appears after the table.
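
For concreteness, below is a minimal sketch of sampling from the GLM Poisson DAG model described in the experiment setup, where each node X_j is Poisson with rate g_j(X_Pa(j)) = exp(θ_j + Σ_{k ∈ Pa(j)} θ_jk X_k) and edge weights are drawn uniformly from [−1, −0.7]. The random-DAG construction (an edge probability over a fixed node order), the zero intercepts θ_j, and the function name are illustrative assumptions; the paper does not release code, and its exact graph-generation procedure may differ.

```python
# Illustrative sketch (not the authors' code) of sampling from a GLM Poisson DAG.
import numpy as np

def sample_poisson_glm_dag(p, n, edge_prob=0.2, seed=None):
    """Sample n observations from a random p-node GLM Poisson DAG."""
    rng = np.random.default_rng(seed)
    # Random DAG over the node order 0..p-1: edge k -> j (k < j) with probability edge_prob.
    adj = np.triu(rng.random((p, p)) < edge_prob, k=1)
    # Edge weights theta_jk drawn uniformly from [-1, -0.7], as in the stated setup.
    theta = np.where(adj, rng.uniform(-1.0, -0.7, size=(p, p)), 0.0)
    theta0 = np.zeros(p)  # intercepts theta_j (assumed zero; not specified in the excerpt)
    X = np.zeros((n, p))
    for j in range(p):  # nodes are already in topological order
        rate = np.exp(theta0[j] + X @ theta[:, j])  # GLM link: exp(theta_j + sum_k theta_jk * X_k)
        X[:, j] = rng.poisson(rate)
    return X, adj

# Example: X, adj = sample_poisson_glm_dag(p=100, n=1000, seed=0)
```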
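
Step 2 of the ODS algorithm exploits the fact that a Poisson variable has conditional variance equal to its conditional mean given its true parents, whereas a node whose parents are not yet accounted for is overdispersed. The sketch below illustrates this scoring idea by stratifying on exact value configurations of the conditioning set and greedily building an ordering; the paper's actual estimator (including truncation with the thresholding constant c0 = 0.005) and the function names here are simplifications and assumptions, not the authors' implementation.

```python
# Illustrative sketch of overdispersion scoring for causal ordering (Step 2 of ODS).
import numpy as np
from collections import defaultdict

def overdispersion_score(X, j, cond_set):
    """Estimate E[Var(X_j | X_S) - E(X_j | X_S)] by stratifying on the values of X_S."""
    groups = defaultdict(list)
    for row in X:
        groups[tuple(row[k] for k in cond_set)].append(row[j])
    n, score = len(X), 0.0
    for vals in groups.values():
        v = np.asarray(vals, dtype=float)
        if v.size > 1:
            # Weighted (variance - mean): near zero for a correctly conditioned Poisson node.
            score += (v.size / n) * (v.var(ddof=1) - v.mean())
    return score

def estimate_ordering(X, candidate_nbrs):
    """Greedy causal ordering: at each step pick the remaining node whose score,
    conditioned on its already-ordered candidate neighbours (from Step 1), is smallest."""
    p = X.shape[1]
    ordering, remaining = [], set(range(p))
    while remaining:
        scores = {j: overdispersion_score(X, j, [k for k in candidate_nbrs[j] if k in ordering])
                  for j in remaining}
        nxt = min(scores, key=scores.get)
        ordering.append(nxt)
        remaining.remove(nxt)
    return ordering
```

Here candidate_nbrs would come from Step 1 (e.g., the neighbourhoods estimated by GLMLasso, MMPC, or HITON); the exact-value stratification is a deliberately simple stand-in for the conditional mean and variance estimation used in the paper.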