FedCD: Federated Semi-Supervised Learning with Class Awareness Balance via Dual Teachers
Authors: Yuzhi Liu, Huisi Wu, Jing Qin
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two medical datasets under various settings demonstrate the effectiveness of FedCD. The code is available at https://github.com/YunzZ-Liu/FedCD. |
| Researcher Affiliation | Academia | Yuzhi Liu¹, Huisi Wu¹*, Jing Qin²; ¹College of Computer Science and Software Engineering, Shenzhen University; ²Centre for Smart Health, The Hong Kong Polytechnic University; hswu@szu.edu.cn |
| Pseudocode | Yes | Algorithm 1: The pipeline of unlabeled client |
| Open Source Code | Yes | The code is available at https://github.com/YunzZ-Liu/FedCD. |
| Open Datasets | Yes | We use the HAM10000 dataset (Tschandl, Rosendahl, and Kittler 2018) for skin lesion classification, which contains 10015 images and 7 classes. For ICH diagnosis, we follow the setup in FedIRM (Liu et al. 2021b) that randomly selects 25,000 images from the RSNA ICH dataset (Flanders et al. 2020), which consists of 5 subtypes. |
| Dataset Splits | Yes | For both benchmark datasets, we employ 70% for training, 10% for validation, and 20% for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with their specific versions. |
| Experiment Setup | Yes | The learning rates for labeled and unlabeled clients are 0.02 and 0.01 respectively. The batch size is 12 for the HAM10000 dataset and 24 for the RSNA ICH dataset. We set 1 local epoch for all clients and train for 1000 rounds (200 warm-ups). The loss function parameters λ1 and λ2 are both set to 0.02. We empirically set temperature parameter τ = 0.5, confident class threshold β = 0.4, warming-up weights α0 = 0.01 and αn = 0.1, low-rank threshold tl = 5 and high-rank threshold th = 6. |
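
The 70%/10%/20% train/validation/test split reported under Dataset Splits can be reproduced along the lines of the sketch below. This is a minimal illustration only: the `split_dataset` helper, the use of `torch.utils.data.random_split`, and the fixed seed are assumptions, not details taken from the paper or its repository.

```python
# Hypothetical sketch of the 70/10/20 split noted in the Dataset Splits row.
# The helper name and the seed are assumptions; the paper does not describe
# how the split is implemented.
import torch
from torch.utils.data import random_split

def split_dataset(dataset, seed=42):
    """Split a dataset into 70% train, 10% validation, 20% test."""
    n = len(dataset)
    n_train = int(0.7 * n)
    n_val = int(0.1 * n)
    n_test = n - n_train - n_val  # remainder goes to the test set
    generator = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, n_val, n_test], generator=generator)
```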
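Likewise, the hyperparameters quoted under Experiment Setup can be collected into a single configuration object for re-implementation. The dataclass and its field names are assumptions made for illustration; only the numeric values come from the paper.

```python
# Hyperparameters reported in the Experiment Setup row, gathered into a
# hypothetical config. Field names are illustrative; values are from the paper.
from dataclasses import dataclass

@dataclass
class FedCDConfig:
    lr_labeled: float = 0.02        # learning rate for labeled clients
    lr_unlabeled: float = 0.01      # learning rate for unlabeled clients
    batch_size_ham10000: int = 12
    batch_size_rsna_ich: int = 24
    local_epochs: int = 1
    rounds: int = 1000              # total communication rounds
    warmup_rounds: int = 200
    lambda1: float = 0.02           # loss weight λ1
    lambda2: float = 0.02           # loss weight λ2
    temperature: float = 0.5        # τ
    confident_class_threshold: float = 0.4  # β
    alpha_0: float = 0.01           # warming-up weight α0
    alpha_n: float = 0.1            # warming-up weight αn
    low_rank_threshold: int = 5     # t_l
    high_rank_threshold: int = 6    # t_h

config = FedCDConfig()
print(config)
```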