Geometry-Aware Instrumental Variable Regression

Authors: Heiner Kremer, Bernhard Schölkopf

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments demonstrate that SMM is competitive with state-of-the-art IV estimators in standard settings and can provide an improvement in the presence of corrupted data and adversarial attacks.
Researcher Affiliation | Academia | Heiner Kremer¹, Bernhard Schölkopf¹; ¹Max Planck Institute for Intelligent Systems. Correspondence to: Heiner Kremer <heiner.kremer@gmail.com>.
Pseudocode | Yes | Algorithm 1 (n-stage Kernel-SMM). Input: initial function f̄, hyperparameters ϵ, λ, γ_x. for i = 1, ..., n do: compute Q(f̄); while not converged do: f ← GradientDescent(f, ∇_f R_{Q(f̄)}(f)); end while; f̄ ← f; end for. Output: function estimate f̄. (A runnable sketch of this loop is given after the table.)
Open Source Code | Yes | Implementations of our estimators are available at https://github.com/HeinerKremer/sinkhorn-iv/.
Open Datasets | No | We consider the Simple IV experiment of Bennett & Kallus (2023) with the following data generating process (Eq. 11): Z = sin(πZ₀/10), T = 0.75Z₀ + 3.5U + 0.14η₁ − 0.6, Y = f(T; θ₀) − 10U + 0.1η₂, where η₁, η₂, U ∼ N(0, I) and Z₀ ∼ Uniform([−5, 5]). (A sampler for this process is sketched after the table.)
Dataset Splits | No | We pick the best hyperparameter configuration by evaluating the MMR objective (Zhang et al., 2023) on a validation data set of the same size as the training set.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments.
Software Dependencies | No | The paper mentions software components and methods such as a "radial basis function kernel," the "limited memory BFGS method," and the "optimistic Adam optimizer," but it does not specify version numbers for any software dependencies.
Experiment Setup | Yes | "For SMM we choose the hyperparameters from the grid defined by ϵ ∈ [10⁻⁶, 10⁻⁴, 10⁻²] and λ/ϵ ∈ [10⁻⁶, 10⁻⁴, 10⁻², 1.0]." and "We tuned the learning rates for the model and adversary by evaluating the Deep GMM estimator for different values and fix them both to 5e−4 for all methods. In the same way we fix the batch size to 200 and the number of epochs to 3000." (The grid search is sketched after the table.)
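
The pseudocode row above compresses Algorithm 1 into one line; the following NumPy sketch makes the two nested loops concrete. It is a minimal illustration, not the paper's implementation: the linear feature model `phi`, the placeholder re-weighting in `make_risk`, and the squared-residual objective are assumptions standing in for the paper's kernel construction of Q(f̄) and the profiled risk R_{Q(f̄)}.

```python
# Minimal NumPy sketch of Algorithm 1 (n-stage Kernel-SMM), assuming a linear
# model f(t) = phi(t) @ theta. `make_risk` is a placeholder: the paper's
# R_{Q(f_bar)} involves a kernel/Sinkhorn construction not reproduced here.
import numpy as np

def make_risk(theta_bar, T, Y, phi):
    """Stand-in for computing Q(f_bar) and the profiled risk R_{Q(f_bar)}."""
    features = phi(T)
    resid_bar = Y - features @ theta_bar
    weights = np.exp(-np.abs(resid_bar))   # placeholder particle measure Q(f_bar)
    weights /= weights.sum()
    def risk(theta):
        r = Y - features @ theta
        return float(np.sum(weights * r ** 2))
    def grad(theta):
        r = Y - features @ theta
        return -2.0 * features.T @ (weights * r)
    return risk, grad

def n_stage_smm(T, Y, phi, n_stages=3, lr=1e-2, tol=1e-10, max_inner=10_000):
    theta_bar = np.zeros(phi(T).shape[1])              # initial function f_bar
    for _ in range(n_stages):                          # for i = 1, ..., n do
        risk, grad = make_risk(theta_bar, T, Y, phi)   # compute Q(f_bar)
        theta, prev = theta_bar.copy(), np.inf
        for _ in range(max_inner):                     # while not converged do
            theta -= lr * grad(theta)                  # f <- GradientDescent(f, grad_f R)
            cur = risk(theta)
            if abs(prev - cur) < tol:
                break
            prev = cur
        theta_bar = theta                              # f_bar <- f
    return theta_bar                                   # output: function estimate f_bar
```

With, e.g., phi = lambda t: np.stack([t, np.ones_like(t)], axis=1), this fits an affine f; swapping in kernel features would recover a nonparametric setting.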
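
The data generating process quoted in the Open Datasets row can be sampled directly. The sketch below carries two assumptions flagged in the comments: the minus signs dropped by the PDF extraction are restored, and the confounder symbol is unified to U (the extracted text was ambiguous between H and U). The structural function f0 is a placeholder, since θ₀ is not given in this excerpt.

```python
# Sampler for the Simple IV data generating process (Eq. 11). The minus signs
# and the confounder symbol U are assumptions: the extracted text dropped the
# signs and was ambiguous between H and U. f0 stands in for f( . ; theta_0).
import numpy as np

def simple_iv_data(n, f0, seed=None):
    rng = np.random.default_rng(seed)
    Z0 = rng.uniform(-5.0, 5.0, size=n)
    U = rng.standard_normal(n)                   # unobserved confounder
    eta1, eta2 = rng.standard_normal(n), rng.standard_normal(n)
    Z = np.sin(np.pi * Z0 / 10.0)                # instrument
    T = 0.75 * Z0 + 3.5 * U + 0.14 * eta1 - 0.6  # treatment, confounded by U
    Y = f0(T) - 10.0 * U + 0.1 * eta2            # outcome
    return Z, T, Y

# Example with a placeholder structural function:
Z, T, Y = simple_iv_data(1000, f0=lambda t: np.abs(t), seed=0)
```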
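
Finally, the protocol from the Dataset Splits and Experiment Setup rows amounts to a small grid search scored on a held-out validation set by the MMR objective, the V-statistic (1/n²) Σᵢⱼ rᵢ k(zᵢ, zⱼ) rⱼ over residuals rᵢ and instrument values zᵢ. In this sketch, `fit_smm` is a hypothetical stand-in for the SMM training routine; only the grid values and the MMR statistic come from the text above.

```python
# Hedged sketch of the hyperparameter selection: a grid over epsilon and
# lambda/epsilon, scored by the MMR objective on a validation set of the same
# size as the training set. `fit_smm` is a hypothetical training routine.
import itertools
import numpy as np

def mmr_objective(f, Z_val, T_val, Y_val, kernel):
    """V-statistic MMR value (1/n^2) sum_ij r_i k(z_i, z_j) r_j."""
    r = Y_val - f(T_val)
    K = kernel(Z_val[:, None], Z_val[None, :])
    return float(r @ K @ r) / len(r) ** 2

def select_hyperparameters(fit_smm, train, val, kernel):
    Z_val, T_val, Y_val = val
    best_cfg, best_score = None, np.inf
    for eps, ratio in itertools.product([1e-6, 1e-4, 1e-2],
                                        [1e-6, 1e-4, 1e-2, 1.0]):
        f = fit_smm(train, epsilon=eps, lam=ratio * eps)  # lambda = (lambda/eps) * eps
        score = mmr_objective(f, Z_val, T_val, Y_val, kernel)
        if score < best_score:
            best_cfg, best_score = {"epsilon": eps, "lambda": ratio * eps}, score
    return best_cfg

# Example kernel for scalar instruments (vectorized RBF):
# kernel = lambda a, b: np.exp(-0.5 * (a - b) ** 2)
```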