Counterfactual Plans under Distributional Ambiguity
Authors: Ngoc Bui, Duy Nguyen, Viet Anh Nguyen
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 5, we conduct experiments on both synthetic and real-world datasets to demonstrate the efficiency of our corrections and of our COPA framework. |
| Researcher Affiliation | Industry | Ngoc Bui, Duy Nguyen, Viet Anh Nguyen — VinAI Research, Vietnam |
| Pseudocode | No | The paper describes methods such as the COPA framework using textual descriptions (e.g., 'The COPA problem (6) can be solved efficiently under mild conditions using a projected (sub)gradient descent algorithm.'), but it does not include explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Source code can be found at https://github.com/ngocbh/COPA. |
| Open Datasets | Yes | We use three real-world datasets: German Credit (Dua & Graff, 2017; Groemping, 2019), Small Business Administration (SBA) (Li et al., 2018), and Student Performance (Cortez & Silva, 2008). |
| Dataset Splits | Yes | For each dataset D, we train a logistic classifier Cθ0 with parameter θ0 on 80% of the instances of the dataset and fix this classifier to construct counterfactual plans throughout the experiments. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for its experiments, such as GPU models, CPU specifications, or cloud computing instance types. |
| Software Dependencies | Yes | The paper mentions using 'MOSEK' as a solver, citing 'MOSEK Optimizer API for Python 9.2.10, 2019', which includes a specific version number. It also states 'In our COPA framework, we use Adam optimizer to implement Projected Gradient Descent', and uses 'Logistic Regression' and 'three-layer MLP' for classifiers, though without version numbers for these libraries. |
| Experiment Setup | Yes | Throughout the experiments, we set the number of counterfactuals to J = 5. For DiCE, we use the default parameters recommended in the DiCE source code. The Mahalanobis correction uses the counterfactual plan obtained by the DiCE method with K = 3 and a perturbation limit of 0.1. In our COPA framework, we use the Adam optimizer to implement Projected Gradient Descent and the ℓ2-distance to compute the perturbation cost between inputs. In this experiment, we run our COPA framework with λ1 = 2.0, λ2 = 200.0. |
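The setup above (J = 5 counterfactuals, projected gradient descent, ℓ2 perturbation cost, weights λ1 and λ2) can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual COPA objective: the surrogate loss (a log-likelihood validity term plus a λ1-weighted ℓ2 cost), the hypothetical classifier `theta0`, the box projection onto [0, 1]^d, and the use of a plain gradient step instead of Adam are all assumptions made for brevity; the λ2-weighted distributional-robustness term is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed logistic classifier C_theta0(x) = sigmoid(theta0 . x)
theta0 = rng.normal(size=3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(X, x0, lam1):
    """Illustrative surrogate objective over a plan X of J counterfactuals:
    validity term -log C(x_j) pushing each row toward the favorable class,
    plus lam1 * mean squared l2 perturbation cost from the input x0."""
    p = sigmoid(X @ theta0)
    validity = -np.log(p + 1e-12).mean()
    cost = lam1 * np.mean(np.sum((X - x0) ** 2, axis=1))
    # Gradient of both terms, averaged over the J counterfactuals.
    grad = (-(1.0 - p)[:, None] * theta0 + 2.0 * lam1 * (X - x0)) / len(X)
    return validity + cost, grad

def copa_pgd_sketch(x0, J=5, lam1=2.0, lr=0.1, steps=200):
    # Initialize J counterfactuals as small perturbations of x0.
    X = np.tile(x0, (J, 1)) + 0.01 * rng.normal(size=(J, len(x0)))
    for _ in range(steps):
        _, g = loss_and_grad(X, x0, lam1)
        X -= lr * g
        X = np.clip(X, 0.0, 1.0)  # projection onto the feasible box [0, 1]^d
    return X

x0 = np.array([0.2, 0.5, 0.1])
plan = copa_pgd_sketch(x0)  # J x d matrix: one counterfactual per row
```

The projection step is what makes this "projected" gradient descent: after each unconstrained gradient update, the plan is mapped back onto the feasible set, here a unit box for normalized features.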