Causal Regularization

Authors: Dominik Janzing

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Section 4 describes some empirical results." and the section heading "4 Experiments"
Researcher Affiliation | Industry | "Dominik Janzing, Amazon Research Tübingen, Germany, janzind@amazon.com"
Pseudocode | Yes | "Our confounder correction algorithm reads: Algorithm ConCorr"
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | Yes | "Taste of wine: This data has been extracted from the UCI machine learning repository [22] for the experiments in [14]. The cause X contains 11 ingredients of different sorts of red wine and Y is the taste assigned by human subjects." and "[22] D. Dua and C. Graff. UCI machine learning repository, 2017. http://archive.ics.uci.edu/ml."
Dataset Splits | Yes | "We have used leave-one-out CV from the Python package scikit for Ridge and Lasso, respectively."
Hardware Specification | No | No specific hardware details (such as GPU/CPU models, memory, or cloud instances) used for running the experiments are mentioned in the paper.
Software Dependencies | No | The paper mentions the "Python package scikit" but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | "For some fixed values of d = ℓ = 30, we generate one mixing matrix M in each run by drawing its entries from the standard normal distribution. In each run we generate n = 1000 instances of the ℓ-dimensional standard normal random vector Z and compute the X values by X = ZM. Afterwards we draw the entries of c and a from N(0, σ_c^2) and N(0, σ_a^2), respectively, after choosing σ_a and σ_c from the uniform distribution on [0, 1]. Finally, we compute the values of Y via Y = Xa + Zc + E, where E is random noise drawn from N(0, σ_E^2) (the parameter σ_E has previously been chosen uniformly at random from [0, 5], which yields quite noisy data)."
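The simulated-data setup quoted above, combined with the leave-one-out Ridge cross-validation mentioned under Dataset Splits, can be sketched as follows. This is a minimal sketch, not the paper's code: the random seed and the alpha grid are assumptions the paper does not specify, and scikit-learn's `RidgeCV` with its default `cv=None` uses an efficient leave-one-out scheme.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)  # seed is an assumption, not from the paper

# Fixed dimensions as in the paper's setup: d = l = 30, n = 1000.
d = l = 30
n = 1000

# Mixing matrix M with standard-normal entries; X = Z M.
M = rng.standard_normal((l, d))
Z = rng.standard_normal((n, l))
X = Z @ M

# sigma_a, sigma_c ~ Uniform[0, 1]; sigma_E ~ Uniform[0, 5].
sigma_a, sigma_c = rng.uniform(0.0, 1.0, size=2)
sigma_E = rng.uniform(0.0, 5.0)

# Entries of a and c drawn from N(0, sigma_a^2) and N(0, sigma_c^2).
a = rng.normal(0.0, sigma_a, size=d)
c = rng.normal(0.0, sigma_c, size=l)
E = rng.normal(0.0, sigma_E, size=n)

# Structural equation: Y = X a + Z c + E (Z c is the confounding term).
Y = X @ a + Z @ c + E

# Leave-one-out CV for Ridge; the alpha grid is a hypothetical choice.
ridge = RidgeCV(alphas=np.logspace(-3, 3, 13))  # cv=None -> efficient LOO
ridge.fit(X, Y)
print(ridge.alpha_, ridge.coef_.shape)
```

The fitted `ridge.coef_` can then be compared against the true causal vector `a`, which is how a regression method's ability to recover the causal effect under confounding would be evaluated in this setup.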