Model Transferability with Responsive Decision Subjects

Authors: Yatong Chen, Zeyu Tang, Kun Zhang, Yang Liu

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present synthetic experimental results on both simulated and real-world data sets. We report our results in Figure 4.
Researcher Affiliation | Collaboration | 1Department of Computer Science and Engineering, University of California, Santa Cruz, California, United States. 2Department of Philosophy, Carnegie Mellon University, Pennsylvania, United States. 3Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates. 4ByteDance Research. Correspondence to: Yang Liu <yangliu@ucsc.edu>.
Pseudocode | Yes | Algorithm 1 One-point bandit gradient descent for performative prediction
Open Source Code | Yes | The details for reproducing our experimental results can be found at https://github.com/UCSC-REAL/Model_Transferability.
Open Datasets | Yes | In particular, we use the FICO credit score data set (Board of Governors of the Federal Reserve System (US), 2007), which contains more than 300k records of TransUnion credit scores of clients from different demographic groups.
Dataset Splits | No | The paper mentions using a "training data set" but does not give the training/validation/test split details (percentages, sample counts, or cross-validation setup) needed to reproduce the experiment.
Hardware Specification | No | The paper does not report the hardware (exact GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software dependencies, such as library or solver names with version numbers (e.g., Python 3.8, PyTorch 1.9), needed to replicate the experiment.
Experiment Setup | No | The paper describes the models used (e.g., logistic regression) and how hS and hT are computed, but it does not give concrete hyperparameter values (e.g., learning rate, batch size) or training configurations for these models.
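The Pseudocode row above names Algorithm 1, a one-point bandit gradient descent for performative prediction, without reproducing it. As an illustrative sketch only (not the authors' exact algorithm), the general technique uses a single loss observation per deployed model to form a zeroth-order gradient estimate; the toy loss, function names, and all parameter values below are assumptions for demonstration:

```python
import numpy as np

def one_point_bandit_gd(loss_at, theta0, steps=2000, delta=0.5, lr=0.02, seed=0):
    """Zeroth-order (one-point) bandit gradient descent.

    `loss_at(theta)` returns a single scalar loss observed after deploying
    `theta` (performative feedback); no gradient access is assumed.
    `delta` trades off the smoothing bias against estimator variance.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    d = theta.size
    for _ in range(steps):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)               # uniform direction on the unit sphere
        ell = loss_at(theta + delta * u)     # one bandit (loss-only) observation
        theta -= lr * (d / delta) * ell * u  # one-point gradient estimate
    return theta

# Toy performative loss: deploying theta shifts the "data" by eps * theta,
# so the loss is sum((theta - 1 - eps*theta)^2), minimized at 1 / (1 - eps).
def toy_performative_loss(theta, eps=0.3):
    return float(np.sum(((1.0 - eps) * theta - 1.0) ** 2))

theta_hat = one_point_bandit_gd(toy_performative_loss, np.zeros(2))
```

Because the optimizer only ever sees loss values, the same loop applies when each evaluation requires redeploying the model and re-sampling from the induced distribution, which is the performative-prediction setting the row refers to.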