Causal Imitability Under Context-Specific Independence Relations

Authors: Fateme Jamshidi, Sina Akbari, Negar Kiyavash

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental evaluation is organized into two parts. In the first part, we address the decision problem pertaining to imitability: we evaluate the gain resulting from accounting for CSIs in rendering previously non-imitable instances imitable. In particular, we assess classic imitability vs. imitability under CSIs for randomly generated graphs. In the second part, we compare the performance of Alg. 2 against baseline algorithms on synthetic datasets (see Sec. D for further details of our experimental setup). A hedged sketch of such a random-graph harness is given after this table.
Researcher Affiliation | Academia | Fateme Jamshidi, EPFL, Switzerland (fateme.jamshidi@epfl.ch); Sina Akbari, EPFL, Switzerland (sina.akbari@epfl.ch); Negar Kiyavash, EPFL, Switzerland (negar.kiyavash@epfl.ch)
Pseudocode | Yes | Algorithm 1: Imitation w.r.t. GL, Π; Algorithm 2: Imitation w.r.t. GL, Π, P(O); Algorithm 3: Find a possible π-backdoor admissible set. A hedged sketch of a related backdoor-admissibility search is given after this table.
Open Source Code | Yes | A Python implementation is accessible at https://github.com/SinaAkbarii/causal-imitation-learning/.
Open Datasets | No | No concrete access information (link, DOI, repository, or formal citation) for a publicly available or open dataset was provided. The paper states, 'we worked with an SCM in which...' and proceeds to define the generative process for the synthetic data. A hedged sketch of sampling from a toy SCM is given after this table.
Dataset Splits | No | The paper describes the generation of synthetic data but does not provide specific details on training, validation, or test dataset splits (e.g., percentages, sample counts, or predefined splits).
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments were provided in the paper.
Software Dependencies | No | The paper mentions a 'Python implementation' but does not provide specific version numbers for Python or any other software dependencies, libraries, or solvers used in the experiments.
Experiment Setup | No | The paper describes the setup for generating the synthetic dataset (Section D) by listing probability distributions, but it does not provide specific experimental setup details such as hyperparameters (e.g., learning rates, batch sizes, number of epochs) or training configurations for the algorithms themselves.
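
To make the first experiment's skeleton concrete, the following is a minimal Python sketch that samples random DAGs and counts how often a crude imitability proxy succeeds. The proxy (a brute-force search for an observed set that d-separates the action from the reward once the action's outgoing edges are removed) is not the paper's imitability criterion; the node roles, graph sizes, and function names (random_dag, proxy_imitable) are illustrative assumptions, and the paper's CSI-aware check would take the place of proxy_imitable.

# Hedged sketch only: a toy random-graph harness, not the paper's Algorithm 1 or 2.
import itertools
import random
import networkx as nx


def random_dag(n_nodes: int, edge_prob: float, rng: random.Random) -> nx.DiGraph:
    """Sample a DAG by orienting every kept edge from lower to higher node index."""
    g = nx.DiGraph()
    g.add_nodes_from(range(n_nodes))
    for u, v in itertools.combinations(range(n_nodes), 2):
        if rng.random() < edge_prob:
            g.add_edge(u, v)
    return g


def d_sep(g, x, y, z):
    """Version-robust d-separation test (newer networkx renamed d_separated to is_d_separator)."""
    if hasattr(nx, "is_d_separator"):
        return nx.is_d_separator(g, x, y, z)
    return nx.d_separated(g, x, y, z)


def proxy_imitable(g: nx.DiGraph, action, reward, observed) -> bool:
    """Crude stand-in for an imitability test: is there an observed set that
    d-separates the action from the reward once the action's outgoing edges
    are removed? The paper's (CSI-aware) criterion would replace this."""
    mutilated = g.copy()
    mutilated.remove_edges_from(list(g.out_edges(action)))
    candidates = sorted(set(observed) - {action, reward})
    for r in range(len(candidates) + 1):
        for z in itertools.combinations(candidates, r):
            if d_sep(mutilated, {action}, {reward}, set(z)):
                return True
    return False


if __name__ == "__main__":
    rng = random.Random(0)
    n_nodes, n_trials, hits = 6, 200, 0
    for _ in range(n_trials):
        g = random_dag(n_nodes, edge_prob=0.4, rng=rng)
        action, reward = n_nodes // 2, n_nodes - 1
        latent = rng.choice([v for v in range(n_nodes) if v not in (action, reward)])
        observed = set(range(n_nodes)) - {latent}
        hits += proxy_imitable(g, action, reward, observed)
    print(f"fraction passing the proxy imitability check: {hits / n_trials:.2f}")

Comparing classic imitability against imitability under CSIs would amount to running two such checkers over the same sampled graphs and reporting the two fractions side by side.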
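
Algorithm 3's goal, finding a possible π-backdoor admissible set, is a CSI-aware relative of Pearl's classic backdoor criterion. The sketch below is not the paper's Algorithm 3: it enumerates candidate adjustment sets on a small hypothetical graph and returns one that satisfies the classic backdoor criterion (the set contains no descendant of the action and d-separates the action from the reward in the graph with the action's outgoing edges removed). The example graph and variable names are assumptions for illustration.

# Hedged sketch only: classic backdoor-admissible set search, not the paper's Algorithm 3.
import itertools
import networkx as nx


def d_sep(g, x, y, z):
    """Version-robust d-separation test (newer networkx renamed d_separated to is_d_separator)."""
    if hasattr(nx, "is_d_separator"):
        return nx.is_d_separator(g, x, y, z)
    return nx.d_separated(g, x, y, z)


def find_backdoor_set(g: nx.DiGraph, action, reward, observed):
    """Return some observed set satisfying Pearl's backdoor criterion, or None.

    Criterion: the set contains no descendant of the action, and it d-separates
    the action from the reward in the graph with the action's outgoing edges removed."""
    backdoor_graph = g.copy()
    backdoor_graph.remove_edges_from(list(g.out_edges(action)))
    forbidden = nx.descendants(g, action) | {action, reward}
    candidates = sorted(set(observed) - forbidden)
    for r in range(len(candidates) + 1):
        for z in itertools.combinations(candidates, r):
            if d_sep(backdoor_graph, {action}, {reward}, set(z)):
                return set(z)
    return None


if __name__ == "__main__":
    # Hypothetical example: confounder C affects both the action X and the reward Y.
    g = nx.DiGraph([("C", "X"), ("C", "Y"), ("X", "Y")])
    print(find_backdoor_set(g, "X", "Y", observed={"C"}))  # -> {'C'}

In this toy example the search returns {'C'}: conditioning on the confounder C blocks the only backdoor path X <- C -> Y.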
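
Since the paper's SCM from Sec. D is not reproduced in this report, the following toy sampler only illustrates the general shape of such a generative process. Every variable, probability value, and the context-specific structure (the reward depends on the latent confounder only when the context equals 0) are assumptions made for illustration, not the paper's actual model.

# Hedged sketch only: a toy binary SCM sampler, not the SCM specified in the paper's Sec. D.
import numpy as np


def sample_toy_scm(n_samples: int, seed: int = 0) -> dict:
    rng = np.random.default_rng(seed)

    # Latent confounder U and observed context C are independent coin flips.
    u = rng.binomial(1, 0.5, size=n_samples)
    c = rng.binomial(1, 0.3, size=n_samples)

    # Expert action X depends on both the latent U and the context C.
    p_x = 0.2 + 0.5 * u + 0.2 * c
    x = rng.binomial(1, np.clip(p_x, 0.0, 1.0))

    # Reward Y depends on U only when C == 0: a context-specific independence
    # (Y independent of U given C = 1), mimicking the kind of CSI the paper exploits.
    p_y = np.where(c == 1, 0.3 + 0.5 * x, 0.1 + 0.4 * x + 0.4 * u)
    y = rng.binomial(1, np.clip(p_y, 0.0, 1.0))

    return {"U": u, "C": c, "X": x, "Y": y}


if __name__ == "__main__":
    data = sample_toy_scm(10_000)
    # The imitating learner would only see the observed variables C, X, and Y.
    print({k: v.mean().round(3) for k, v in data.items() if k != "U"})

The np.where branch encodes the context-specific independence; a dataset split (train/validation/test) would be applied on top of such samples, which is exactly the information the report notes is missing from the paper.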