When Source-Free Domain Adaptation Meets Learning with Noisy Labels
Authors: Li Yi, Gezheng Xu, Pengcheng Xu, Jiaqi Li, Ruizhi Pu, Charles Ling, Ian McLeod, Boyu Wang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate significant improvements to existing SFDA algorithms by leveraging the early-time training phenomenon (ETP) to address the label noise in SFDA. |
| Researcher Affiliation | Academia | ¹Department of Statistical and Actuarial Sciences, ²Department of Computer Science, University of Western Ontario |
| Pseudocode | Yes | Algorithm 1: SFDA-ELR (Source-Free Domain Adaptation with ELR) |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We use four benchmark datasets... Office-31 (Saenko et al., 2010), Office-Home (Venkateswara et al., 2017), VisDA (Peng et al., 2017) and DomainNet (Peng et al., 2019). |
| Dataset Splits | No | The paper mentions using benchmark datasets but does not explicitly provide specific train/validation/test split percentages or sample counts, nor does it refer to predefined splits with citations for reproducibility. |
| Hardware Specification | No | The paper mentions ResNet architectures used as backbones but provides no specific details on the hardware (GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper discusses various methods and loss functions but does not specify software dependencies (e.g., programming languages, libraries, or frameworks) with version numbers. |
| Experiment Setup | Yes | We set the learning rate to 1e-4 for all layers except for the last two FC layers, where we use a learning rate of 1e-3, for all datasets. The hyperparameter β is chosen from {0.5, 0.6, 0.7, 0.8, 0.9, 0.99}, and λ is chosen from {1, 3, 7, 12, 25}. Table 5: Optimal Hyperparameters (β/λ) on various datasets. |
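The pseudocode and hyperparameter rows above describe plugging Early-Learning Regularization (ELR) into an SFDA objective, with β as an EMA momentum and λ as the regularization weight. The sketch below shows one plausible PyTorch implementation of the ELR term; the class name, buffer layout, and normalization details are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn.functional as F

class ELRLoss(torch.nn.Module):
    """Minimal sketch of an ELR regularizer as used by SFDA-ELR.

    Maintains an exponential moving average (EMA) of per-sample
    predictions and penalizes disagreement with it. Names and the
    exact normalization are illustrative assumptions.
    """

    def __init__(self, num_samples, num_classes, beta=0.9, lam=3.0):
        super().__init__()
        self.beta = beta  # EMA momentum, chosen from {0.5, ..., 0.99}
        self.lam = lam    # regularization weight, chosen from {1, 3, 7, 12, 25}
        # One running target distribution per unlabeled target sample.
        self.register_buffer("targets", torch.zeros(num_samples, num_classes))

    def forward(self, logits, idx):
        probs = F.softmax(logits, dim=1)
        with torch.no_grad():
            # EMA update of the stored targets for this mini-batch.
            t = self.beta * self.targets[idx] + (1 - self.beta) * probs
            self.targets[idx] = t / t.sum(dim=1, keepdim=True)
        # ELR penalty: lam * mean(log(1 - <p_i, t_i>)); minimizing it
        # pulls predictions toward the early-learning EMA targets.
        inner = (self.targets[idx] * probs).sum(dim=1).clamp(max=1 - 1e-4)
        return self.lam * torch.log(1 - inner).mean()
```

In training, the term is simply added to whatever SFDA loss is in use, e.g. `loss = sfda_loss + elr(logits, sample_indices)`.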
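The per-layer learning rates quoted in the Experiment Setup row (1e-4 everywhere except 1e-3 on the last two FC layers) map naturally onto optimizer parameter groups. A minimal sketch, assuming a model split into a `backbone` and two head layers `fc1`/`fc2` (hypothetical attribute names; the optimizer choice and momentum are also assumptions, since the quoted setup specifies only the learning rates):

```python
import torch

optimizer = torch.optim.SGD(
    [
        # All layers except the last two FC layers: lr = 1e-4.
        {"params": model.backbone.parameters(), "lr": 1e-4},
        # Last two FC layers: lr = 1e-3.
        {"params": model.fc1.parameters(), "lr": 1e-3},
        {"params": model.fc2.parameters(), "lr": 1e-3},
    ],
    momentum=0.9,  # assumption: not stated in the quoted setup
)
```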