Domain Generalization via Heckman-type Selection Models
Authors: Hyungu Kahng, Hyungrok Do, Judy Zhong
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We also demonstrate its efficacy empirically through simulations and experiments on a set of benchmark datasets comparing with other well-known DG methods. |
| Researcher Affiliation | Academia | ¹Korea University, ²NYU School of Medicine; hgkahng@korea.ac.kr, {hyungrok.do,judy.zhong}@nyulangone.org |
| Pseudocode | Yes | Algorithm 1 Two-Step Optimization for Heckman DG |
| Open Source Code | Yes | code available: https://github.com/hgkahng/domain-generalization-lightning |
| Open Datasets | Yes | To further demonstrate the effectiveness of Heckman DG on high-dimensional data regimes, we conducted experiments on four datasets from the WILDS benchmark (Koh et al., 2021): 1) CAMELYON17, 2) POVERTYMAP, 3) IWILDCAM, and 4) RXRX1. |
| Dataset Splits | Yes | Detailed descriptions of dataset statistics are presented in Table 5 of Appendix A.4. In the Domain row, the three numbers in parentheses denote the number of train, validation, and test domains. (e.g., CAMELYON17: 5 Hospitals (3, 1, 1)) |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments, such as GPU models, CPU models, or detailed specifications of computing resources. |
| Software Dependencies | No | The paper mentions software components like 'DenseNet-121', 'ResNet-18-MS', 'ResNet-50', 'Adam', and 'SGD' but does not specify version numbers for these or other key software dependencies (e.g., Python, PyTorch, CUDA). |
| Experiment Setup | Yes | Details on training configurations of Heckman DG are provided in Table 6. This includes parameters such as 'Epochs', 'Batch Size', 'Learning Rate', 'Weight Decay', 'ImageNet Weights', and 'Data Augmentation'. |