Efficient Enumeration of Markov Equivalent DAGs

Authors: Marcel Wienöbst, Malte Luttermann, Max Bannach, Maciej Liśkiewicz

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this paper, we present the first linear-time delay algorithm. On the theoretical side, we show that our algorithm can be generalized to enumerate DAGs represented by models that incorporate background knowledge, such as MPDAGs; on the practical side, we provide an efficient implementation and evaluate it in a series of experiments.
Researcher Affiliation | Academia | Marcel Wienöbst (1), Malte Luttermann (2), Max Bannach (1), Maciej Liśkiewicz (1); (1) Institute for Theoretical Computer Science, University of Lübeck, Germany; (2) Institute of Information Systems, University of Lübeck, Germany; {wienoebst@tcs, luttermann@ifis, bannach@tcs, liskiewi@tcs}.uni-luebeck.de
Pseudocode | Yes | Algorithm 1: Linear-time delay algorithm MCS-ENUM for listing all AMOs of a chordal graph. (A simplified MCS sketch follows this table.)
Open Source Code | Yes | The implementations of the algorithms are available at https://github.com/mwien/mec-enum.
Open Datasets | No | The paper describes generating instances as 'undirected graphs generated by randomly inserting edges, which do not violate chordality, until a graph with 3n edges is reached.' It does not refer to a publicly available dataset with specific access information or a formal citation. (A hedged sketch of such a generator also follows this table.)
Dataset Splits | No | The paper does not describe specific train/validation/test dataset splits. It evaluates the performance of enumeration algorithms on randomly generated graphs rather than training and validating a model on pre-defined data splits.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper states that the algorithms were 'implemented in Julia (Bezanson et al. 2017)' but does not provide specific version numbers for Julia or any other software dependencies.
Experiment Setup | No | The paper describes how graph instances were generated for the experiments (random chordal graphs) but does not provide experimental setup details such as hyperparameters or optimization settings; as an algorithmic rather than a machine-learning paper, it has no model-training configuration to report.
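
The Pseudocode row refers to Algorithm 1, MCS-ENUM, which lists all AMOs of a chordal graph with linear-time delay. The sketch below is only a minimal illustration of the primitive it builds on: a single maximum cardinality search (MCS) pass whose visit order, used to orient the edges, yields one AMO. It does not reproduce the paper's enumeration scheme or its delay guarantee; the function name `mcs_amo` and the adjacency-dict representation are assumptions made for this example.

```python
def mcs_amo(adj):
    """One maximum cardinality search (MCS) pass over a chordal graph.

    adj: dict mapping each vertex to the set of its neighbours (undirected).
    Orienting every edge from the earlier-visited endpoint to the
    later-visited endpoint of the returned order yields a single AMO.
    """
    weight = {v: 0 for v in adj}   # number of already-visited neighbours
    visited = set()
    order = []
    for _ in range(len(adj)):
        # visit an unvisited vertex with the most visited neighbours
        v = max((u for u in adj if u not in visited), key=lambda u: weight[u])
        visited.add(v)
        order.append(v)
        for w in adj[v]:
            if w not in visited:
                weight[w] += 1
    pos = {v: i for i, v in enumerate(order)}
    # orient each undirected edge along the visit order (u < w avoids duplicates)
    amo = [(u, w) if pos[u] < pos[w] else (w, u)
           for u in adj for w in adj[u] if u < w]
    return order, amo


if __name__ == "__main__":
    # chordal example: 4-cycle 1-2-3-4 with chord (1, 3)
    adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}}
    order, amo = mcs_amo(adj)
    print(order, amo)
```

Roughly speaking, enumerating all AMOs rather than just one additionally requires bookkeeping over the possible tie-breaks of the MCS, which this sketch omits and which the paper's algorithm handles with linear-time delay.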
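The Open Datasets row quotes the instance-generation procedure used in the experiments. The paper's own implementation is in Julia; the Python/networkx sketch below is an assumption-laden illustration of that procedure (the name `random_chordal_graph`, the use of `networkx.is_chordal`, and the rejection-sampling loop are choices made here, not the authors' generator).

```python
import random
import networkx as nx


def random_chordal_graph(n, target_edges=None, seed=None):
    """Sketch of the described procedure: insert random edges, rejecting any
    insertion that would violate chordality, until 3n edges are present."""
    rng = random.Random(seed)
    if target_edges is None:
        target_edges = 3 * n           # the quoted procedure targets 3n edges
    G = nx.empty_graph(n)              # n isolated vertices; trivially chordal
    while G.number_of_edges() < target_edges:
        u, v = rng.sample(range(n), 2)
        if G.has_edge(u, v):
            continue
        G.add_edge(u, v)
        if not nx.is_chordal(G):       # reject edges that create a chordless cycle
            G.remove_edge(u, v)
    return G


if __name__ == "__main__":
    G = random_chordal_graph(32, seed=0)
    print(G.number_of_edges(), nx.is_chordal(G))
```

Note that the sketch assumes n is large enough for 3n edges to exist (n ≥ 7, since n(n-1)/2 ≥ 3n); otherwise the loop cannot terminate.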