Automated Efficient Estimation using Monte Carlo Efficient Influence Functions

Authors: Raj Agrawal, Sam Witty, Andy Zane, Eli Bingham

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type Experimental We show empirically that estimators using MC-EIF are at parity with estimators using analytic EIFs. Finally, we present a novel capstone example using MC-EIF for optimal portfolio selection.
Researcher Affiliation Collaboration Raj Agrawal (Basis Research Institute, Broad Institute; raj@basis.ai); Sam Witty (Basis Research Institute, Broad Institute; sam@basis.ai); Andy Zane (Basis Research Institute, UMass Amherst; andy@basis.ai); Eli Bingham (Basis Research Institute, Broad Institute; eli@basis.ai)
Pseudocode Yes Algorithm 1: MC-EIF one-step estimator
Open Source Code Yes Our MC-EIF implementation is publicly available in the Python package ChiRho. All results shown here are end-to-end reproducible.
Open Datasets Yes All influence function computations are relative to an initial point estimate ϕ̂, found through maximum a posteriori estimation using 500 training datapoints.
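To make the quoted setup concrete, here is a minimal sketch of what "an initial point estimate ϕ̂ via maximum a posteriori estimation" means, using a toy conjugate Gaussian model (not the paper's model) where the MAP estimate has a closed form; the function name and the true mean of 2.0 are illustrative choices, not from the paper.

```python
import numpy as np

def map_estimate_mean(x, prior_var=1.0, noise_var=1.0):
    """MAP estimate of a Gaussian mean phi under a N(0, prior_var) prior
    and x_i ~ N(phi, noise_var) likelihood.

    Maximizing the log posterior gives the closed form
    phi_hat = sum(x) / (n + noise_var / prior_var)."""
    n = len(x)
    return np.sum(x) / (n + noise_var / prior_var)

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=500)  # 500 training datapoints, as in the quote
phi_hat = map_estimate_mean(x)      # shrinks the sample mean slightly toward 0
```

In the paper's actual pipeline ϕ̂ would come from optimizing the model's log posterior (e.g. with PyTorch), but the role it plays downstream is the same: a fixed point at which influence functions are evaluated.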
Dataset Splits Yes Algorithm 1: MC-EIF one-step estimator. Input: target functional ψ, initial parameter estimate ϕ̂, held-out datapoints {x_n}_{n=N/2+1}^{N}, number of Monte Carlo samples M
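The one-step estimator quoted above has a simple generic shape: a plug-in estimate of the functional plus the average of the (efficient) influence function over held-out data. The sketch below shows that shape only; the function names are hypothetical and the influence function is supplied as a callable, standing in for the paper's Monte Carlo EIF approximation.

```python
import numpy as np

def one_step_estimate(psi_plugin, eif_fn, held_out):
    """Generic one-step (debiased) estimator: the plug-in value of the
    target functional plus the mean of the estimated efficient influence
    function over held-out datapoints."""
    correction = np.mean([eif_fn(x) for x in held_out])
    return psi_plugin + correction

# Toy check with the mean functional psi(P) = E[X]:
# plug-in from the "training" half, and EIF(x) = x - psi_plugin.
train = np.array([0.0, 1.0])
held_out = np.array([1.0, 2.0, 3.0])
psi_plugin = train.mean()             # 0.5
eif = lambda x: x - psi_plugin
estimate = one_step_estimate(psi_plugin, eif, held_out)  # -> 2.0, the held-out mean
```

For this toy functional the correction exactly recovers the held-out sample mean; MC-EIF's contribution in the paper is computing `eif_fn` numerically when no analytic EIF is available.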
Hardware Specification Yes All experiments were run on an Apple M2 pro. In Figure 8, we plot the runtime of our method under various conditions.
Software Dependencies No The paper mentions software like 'pytorch' and 'Pyro' but does not specify their version numbers for reproducibility, which is required for a 'Yes' answer.
Experiment Setup Yes In Section 5, we consider the following model with confounders c, treatment t, and response y: µ0 ∼ N(0, 1) (intercept); ξ ∼ N(0, I_D) (outcome weights); π ∼ N(0, I_D) (propensity weights); τ ∼ N(0, 1) (treatment weight); c_n ∼ N(0, I_D) (confounders); t_n | c_n, π ∼ Bernoulli(logits = π^T c_n) (treatment assignment); y_n ∼ N(τ t_n + ξ^T c_n + µ0, 1) (response)
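The generative model quoted above can be sketched as a forward simulator. This is an assumed NumPy rendering for illustration (the paper uses Pyro); in particular, the N(0, I_D) priors on the weight vectors and the default sizes N=500, D=2 are assumptions made here for concreteness.

```python
import numpy as np

def simulate(N=500, D=2, seed=0):
    """Draw one dataset from the confounded linear model:
    latent global parameters, then per-datapoint confounders,
    treatment assignment, and response."""
    rng = np.random.default_rng(seed)
    mu0 = rng.normal()                      # intercept ~ N(0, 1)
    xi = rng.normal(size=D)                 # outcome weights (assumed N(0, I_D))
    pi = rng.normal(size=D)                 # propensity weights (assumed N(0, I_D))
    tau = rng.normal()                      # treatment weight ~ N(0, 1)
    c = rng.normal(size=(N, D))             # confounders c_n ~ N(0, I_D)
    p = 1.0 / (1.0 + np.exp(-(c @ pi)))     # sigmoid of logits pi^T c_n
    t = rng.binomial(1, p)                  # treatment t_n ~ Bernoulli(p)
    y = rng.normal(tau * t + c @ xi + mu0)  # response, unit noise variance
    return c, t, y, tau

c, t, y, tau = simulate()
```

Here τ is the average treatment effect, which is the kind of target functional ψ whose efficient influence function MC-EIF approximates numerically.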