Ensemble Pruning for Out-of-distribution Generalization
Authors: Fengchun Qiao, Xi Peng
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on common benchmarks demonstrate the superiority of our approach in both multi- and single-source OoD generalization. |
| Researcher Affiliation | Academia | Deep REAL Lab, Department of Computer and Information Sciences, University of Delaware, DE, USA. Correspondence to: Xi Peng <xipeng@udel.edu>. |
| Pseudocode | No | The paper describes mathematical formulations and procedural steps but does not include structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The source codes are publicly available at: https://github.com/joffery/TEP. |
| Open Datasets | Yes | We evaluate our method on the common OoD generalization benchmark DomainBed (Gulrajani & Lopez-Paz, 2020). VLCS contains photographic images from four domains: Caltech101, LabelMe, SUN09, and VOC2007. Terra Incognita consists of photos of wild animals captured by camera traps at four different locations. |
| Dataset Splits | Yes | Following (Gulrajani & Lopez-Paz, 2020), we use a validation set selected from the training domains for model selection and all the experimental results are averaged over 3 trials. (An illustrative split sketch appears below the table.) |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used for running its experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions using ResNet-50 and standard SDP solvers but does not provide specific software library names with their version numbers. |
| Experiment Setup | Yes | We empirically set λ = 1 for all experiments. Following (He et al., 2024), we set K = N/2 for all experiments. (An illustrative pruning sketch appears below the table.) |
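
As a reading aid for the Dataset Splits row, here is a minimal sketch of a DomainBed-style training-domain validation split averaged over 3 trials. The 20% holdout fraction and the per-domain example counts are assumptions for illustration only; they are not taken from the paper.

```python
import numpy as np

def split_training_domains(domain_sizes, holdout_fraction=0.2, seed=0):
    """Hold out a fraction of each training domain for model selection.

    domain_sizes: list of example counts, one per training domain.
    Returns a list of (train_indices, val_indices) pairs, one per domain.
    """
    rng = np.random.default_rng(seed)
    splits = []
    for n in domain_sizes:
        perm = rng.permutation(n)
        n_val = int(n * holdout_fraction)
        splits.append((perm[n_val:], perm[:n_val]))  # (train, val)
    return splits

# Example: three training domains; results are averaged over 3 trials (seeds).
for trial in range(3):
    splits = split_training_domains([2817, 3376, 3282], seed=trial)
```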
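The hyperparameters in the Experiment Setup row (λ = 1, K = N/2) control how many ensemble members are kept and how the selection objective is weighted. The snippet below is only a generic greedy illustration of pruning N members down to K = N/2 under a λ-weighted trade-off between validation accuracy and pairwise redundancy; the paper's actual selection objective (reportedly solved with standard SDP solvers) is not reproduced here.

```python
import numpy as np

def prune_ensemble(val_acc, redundancy, lam=1.0):
    """Greedily keep K = N/2 ensemble members.

    val_acc:    (N,) validation accuracy of each member.
    redundancy: (N, N) pairwise similarity (e.g., prediction agreement).
    lam:        weight of the redundancy penalty (the paper sets lambda = 1).
    """
    n = len(val_acc)
    k = n // 2                      # K = N/2, as in the quoted setup
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            penalty = sum(redundancy[i, j] for j in selected)
            score = val_acc[i] - lam * penalty
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Example: N = 6 members, keep K = 3 (placeholder accuracies and agreements).
acc = np.array([0.71, 0.68, 0.74, 0.70, 0.66, 0.73])
agree = np.full((6, 6), 0.1)
print(prune_ensemble(acc, agree, lam=1.0))
```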