Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
QWO: Speeding Up Permutation-Based Causal Discovery in LiGAMs
Authors: Mohammad Shahverdikondori, Ehsan Mokhtarian, Negar Kiyavash
NeurIPS 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present a comprehensive evaluation of QWO, which is designed for the module that computes G^π in permutation-based causal discovery methods in LiGAMs. We generated random graphs according to an Erdős–Rényi model with an average degree of i for each node, denoted by ERi. Two metrics were used to evaluate the performance of the methods: Skeleton F1 Score (SKF1): ... Complete PDAG SHD (PSHD): ... |
| Researcher Affiliation | Academia | Mohammad Shahverdikondori College of Management of Technology EPFL, Lausanne, Switzerland EMAIL Ehsan Mokhtarian School of Computer and Communication Sciences EPFL, Lausanne, Switzerland EMAIL Negar Kiyavash College of Management of Technology EPFL, Lausanne, Switzerland EMAIL |
| Pseudocode | Yes | Algorithm 1 The QWO module for computing and updating G^π. Algorithm 2 Integrating QWO into a simple search method for causal discovery. |
| Open Source Code | Yes | The implementation is publicly available at https://github.com/ban-epfl/QWO. |
| Open Datasets | Yes | We evaluated the performance of QWO and other methods on small real-world graphs, namely ASIA [LS88], CANCER [KN10], SACHS [SPP+05], and SURVEY [SD21], as well as ER2 graphs with 5 nodes. |
| Dataset Splits | No | The paper mentions '5-fold cross-validation' in the context of the CV General method, which is a baseline. However, it does not specify explicit train/validation/test splits or cross-validation for its own experimental setup or data partitioning strategy. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, memory, or cloud resources) used for conducting the experiments. |
| Software Dependencies | No | The paper states: 'We used the implementations provided in the causal-learn library [ZHC+24] for BIC, BDeu, and CV General methods.' However, it does not provide specific version numbers for 'causal-learn' or any other software dependencies. |
| Experiment Setup | Yes | We generated random graphs according to an Erdős–Rényi model with an average degree of i for each node, denoted by ERi. The number of data samples for this part is set to 500. The depth of the DFS algorithm for GRaSP was set to 3, and the value of k (the maximum distance of swapped indices) in the HC algorithm was set to 5. |
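The experimental setup quoted above (ERi random graphs, linear-Gaussian data, 500 samples) can be sketched as follows. This is a minimal illustration, not the paper's code: the function names, the edge-weight range, and the use of a fixed topological order are all assumptions made for the example.

```python
import numpy as np

def sample_er_dag(n_nodes, avg_degree, rng):
    """Sample a random DAG from an Erdős–Rényi model with a given average degree.

    Edges are oriented by a fixed node order to guarantee acyclicity. With edge
    probability p = avg_degree / (n_nodes - 1) over the n(n-1)/2 ordered pairs,
    the expected degree of each node is avg_degree (the "i" in ERi).
    """
    p = avg_degree / (n_nodes - 1)
    B = np.zeros((n_nodes, n_nodes))  # weighted adjacency: B[j, i] != 0 means i -> j
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < p:
                B[j, i] = rng.uniform(0.5, 1.5)  # hypothetical weight range
    return B

def sample_linear_gaussian(B, n_samples, rng):
    """Draw samples from the linear SEM X = B X + eps with standard Gaussian noise."""
    n = B.shape[0]
    eps = rng.standard_normal((n_samples, n))
    # Solving X = B X + eps gives X = (I - B)^{-1} eps (applied row-wise here).
    return eps @ np.linalg.inv(np.eye(n) - B).T

rng = np.random.default_rng(0)
B = sample_er_dag(5, 2, rng)             # an ER2 graph with 5 nodes, as in the paper
X = sample_linear_gaussian(B, 500, rng)  # 500 samples, matching the quoted setup
```

A causal discovery method would then be evaluated by comparing the graph recovered from `X` against the ground-truth structure encoded in `B`, e.g. via the Skeleton F1 and PDAG SHD metrics mentioned above.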