PAC Generalization via Invariant Representations
Authors: Advait U Parulekar, Karthikeyan Shanmugam, Sanjay Shakkottai
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we empirically demonstrate that in the setting described above using a notion of generalization that we describe, most approximately invariant representations generalize to most new distributions. |
| Researcher Affiliation | Collaboration | 1Department of Electrical and Computer Engineering, University of Texas at Austin 2Google Research India. |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm block. |
| Open Source Code | Yes | The code is available at https://github.com/advaitparulekar/PAC IRM |
| Open Datasets | No | The experiments use synthetic data generated from a structural equation model rather than a published dataset: "We consider the 7-node linear SEM in Figure 3. The target variable is taken to be Xt. Each edge weight is set to 1 for the observational distribution." |
| Dataset Splits | No | The paper discusses drawing training and test samples but does not specify a separate validation set or exact split percentages for reproduction. |
| Hardware Specification | No | The paper describes experiments but does not specify hardware details such as GPU/CPU models or memory. |
| Software Dependencies | No | The paper provides a link to its code but does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | We consider the 7-node linear SEM in Figure 3. The target variable is taken to be Xt. Each edge weight is set to 1 for the observational distribution. We consider an interventional distribution Dhard with support over the set of hard interventions on nodes {v3, v4, v5}. Recall that a hard intervention consists of assigning a value to a node. We draw m interventional distributions from Dhard as our training interventions, and draw a sample consisting of N = 200000 datapoints from each distribution. In our experiments, σ² = 1. |
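The experiment setup above can be sketched in code. This is a minimal illustration, not the authors' implementation: the paper's Figure 3 DAG is not reproduced here, so the `EDGES` topology below is a hypothetical placeholder, and the distribution over hard interventions (how nodes and clamped values are drawn) is likewise assumed for illustration. Only the unit edge weights, the noise variance σ² = 1, the intervenable node set {v3, v4, v5}, and the meaning of a hard intervention (assigning a fixed value to a node) come from the paper.

```python
import numpy as np

# Hedged sketch of the described setup: a 7-node linear SEM with unit edge
# weights, Gaussian noise (sigma^2 = 1), and hard interventions on nodes
# {v3, v4, v5}. EDGES is a hypothetical DAG standing in for Figure 3.
rng = np.random.default_rng(0)

NUM_NODES = 7
SIGMA2 = 1.0  # noise variance, as stated in the paper
# Hypothetical (parent, child) edges in topological order; weights are all 1.
EDGES = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 6), (5, 6)]
INTERVENABLE = [3, 4, 5]  # nodes {v3, v4, v5} eligible for hard interventions


def sample_sem(n, intervention=None):
    """Draw n samples from the linear SEM.

    intervention: optional {node: value} dict implementing hard interventions,
    i.e. the node is clamped to the value and its structural equation ignored.
    """
    x = np.zeros((n, NUM_NODES))
    parents = {j: [i for (i, c) in EDGES if c == j] for j in range(NUM_NODES)}
    for j in range(NUM_NODES):  # nodes are assumed topologically ordered
        if intervention is not None and j in intervention:
            x[:, j] = intervention[j]  # hard intervention: assign a value
        else:
            noise = rng.normal(0.0, np.sqrt(SIGMA2), size=n)
            x[:, j] = sum(x[:, p] for p in parents[j]) + noise
    return x


# Observational sample, then m = 3 training interventions drawn from a
# (hypothetical) distribution over hard interventions on {v3, v4, v5}.
# The paper uses N = 200000 per distribution; a smaller n is used here.
obs = sample_sem(1000)
train_samples = []
for _ in range(3):
    node = int(rng.choice(INTERVENABLE))
    value = rng.normal()
    train_samples.append(sample_sem(1000, intervention={node: value}))
```

Each element of `train_samples` is then one training environment; an invariant-representation method would be fit across these environments.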