What is being transferred in transfer learning?
Authors: Behnam Neyshabur, Hanie Sedghi, Chiyuan Zhang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To ensure that we are capturing a general phenomenon, we look into target domains that are intrinsically different and diverse. We use CHEXPERT [Irvin et al., 2019], a medical imaging dataset of chest x-rays covering different diseases. We also consider the DOMAINNET [Peng et al., 2019] datasets, which are specifically designed to probe transfer learning in diverse domains; the domains range from real images to sketches, clipart, and painting samples. |
| Researcher Affiliation | Industry | Google (neyshabur@google.com); Google Brain (hsedghi@google.com); Google Brain (chiyuan@google.com) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | We use CHEXPERT [Irvin et al., 2019], a medical imaging dataset of chest x-rays covering different diseases. We also consider the DOMAINNET [Peng et al., 2019] datasets, which are specifically designed to probe transfer learning in diverse domains. |
| Dataset Splits | No | The paper mentions training epochs and evaluating checkpoints, but does not provide specific percentages or counts for the training, validation, or test splits in the main text; it refers to Appendix A for these details, which were not accessible. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., libraries, frameworks, or operating systems). |
| Experiment Setup | Yes | For CHEXPERT, fine-tuning with base learning rate 0.1 is not shown, as it failed to converge. The right pane shows the average training accuracy over 100 fine-tuning epochs. (A hedged sketch of this setup follows the table.) |
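The setup quoted above (fine-tuning a pretrained network on the target domain with a base learning rate of 0.1 for 100 epochs, while tracking average training accuracy) can be sketched as follows. This is a minimal illustration, not the authors' released code: the SGD optimizer, momentum value, ResNet-50 backbone, class count, and `target_loader` name are assumptions made for the sketch; only the base learning rate and epoch count come from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

BASE_LR = 0.1    # base learning rate quoted from the paper
EPOCHS = 100     # number of fine-tuning epochs quoted from the paper
NUM_CLASSES = 5  # assumption, e.g. the five CheXpert competition tasks

# Assumption: start from an ImageNet-pretrained ResNet-50 and swap the head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Assumption: plain SGD with momentum; the paper only states the base LR.
optimizer = torch.optim.SGD(model.parameters(), lr=BASE_LR, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def finetune(model, target_loader, device="cuda"):
    """Fine-tune all layers on a target-domain loader (hypothetical name)."""
    model.to(device).train()
    for epoch in range(EPOCHS):
        correct, seen = 0, 0
        for images, labels in target_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            logits = model(images)
            loss = criterion(logits, labels)
            loss.backward()
            optimizer.step()
            correct += (logits.argmax(dim=1) == labels).sum().item()
            seen += labels.size(0)
        # Average training accuracy per epoch, the quantity the paper plots.
        print(f"epoch {epoch}: train acc {correct / seen:.3f}")
```

Note that CheXpert is a multi-label task in practice, so a sigmoid head with `nn.BCEWithLogitsLoss` would be a more faithful choice there; the single-label head above keeps the sketch short.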