The s-value: evaluating stability with respect to distributional shifts

Authors: Suyash Gupta, Dominik Rothenhäusler

NeurIPS 2023

Reproducibility assessment (Variable: Result, followed by the LLM response)

Research Type: Experimental
  "We evaluate the performance of the proposed measure on real data and show that it can elucidate the distributional instability of a parameter with respect to certain shifts and can be used to improve estimation accuracy under shifted distributions." "We will investigate the empirical performance of this two-stage approach in Section 5."

Researcher Affiliation: Academia
  Suyash Gupta, Department of Statistics, Stanford University, Stanford, CA 94305, suyash028@gmail.com; Dominik Rothenhäusler, Department of Statistics, Stanford University, Stanford, CA 94305, rdominik@stanford.edu

Pseudocode: No
  The paper describes algorithms and procedures in text and mathematical formulations but does not include any clearly labeled pseudocode or algorithm blocks.

Open Source Code: No
  The paper does not provide any explicit statements about releasing source code for the methodology described, nor does it include links to a code repository.

Open Datasets: Yes
  "Here, we analyze the stability of the average treatment effect estimator in the presence of covariate shift using the NSW dataset [29], ..." "We evaluate the effectiveness of our method using the wine quality dataset from the UCI Machine Learning Repository [13, 18]."

Dataset Splits: Yes
  "We obtain our training set by adding some proportion α of randomly chosen samples from the DJW subset to the DJWC subset, where α takes values in the set {0.05, 0.1, 0.2, 0.3}, and use the remaining samples as the test set." "We use all red wines as the training set and randomly select a proportion α from the white wines, where α ∈ {0.01, 0.05, 0.1}. The remaining observations are used for testing."

Hardware Specification: No
  The paper does not provide specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, or memory specifications).

Software Dependencies: No
  The paper mentions using "augmented inverse probability weighting (AIPW) using causal forests [50]" and "ordinary least-squares regression," but it does not specify version numbers for any software dependencies, libraries, or programming languages used.

Experiment Setup: No
  The paper discusses the methods used (AIPW with causal forests, OLS regression) and the datasets, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations for the models.
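The dataset-split procedure quoted under "Dataset Splits" (train on one subset plus a fraction α of randomly chosen samples from the shifted subset, test on the remainder) can be sketched as below. This is a minimal illustration, not the authors' code: the helper name `make_shifted_split` and the synthetic stand-in arrays for the red/white wine features are assumptions.

```python
import numpy as np

def make_shifted_split(source, target, alpha, seed=None):
    """Hypothetical helper mirroring the paper's split description:
    training set = all source rows plus a fraction alpha of target rows;
    the remaining target rows form the test set."""
    rng = np.random.default_rng(seed)
    n = len(target)
    k = int(alpha * n)                 # number of target samples moved into training
    idx = rng.permutation(n)           # random selection of target rows
    train = np.concatenate([source, target[idx[:k]]])
    test = target[idx[k:]]
    return train, test

# Synthetic stand-ins with the UCI wine-quality dimensions (11 features).
red = np.random.randn(1599, 11)    # "red wines": the unshifted training source
white = np.random.randn(4898, 11)  # "white wines": the shifted target population

train, test = make_shifted_split(red, white, alpha=0.05, seed=0)
```

The same helper covers the NSW experiment by passing the DJWC subset as `source`, the DJW subset as `target`, and α ∈ {0.05, 0.1, 0.2, 0.3}.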