Federated Split Task-Agnostic Vision Transformer for COVID-19 CXR Diagnosis

Authors: Sangjoon Park, Gwanghyun Kim, Jeongsol Kim, Boah Kim, Jong Chul Ye

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our framework can show stable performance even under non-IID settings... We compared the performance of the classification model trained with FESTA against a data-centralized setting and other distributed learning strategies for the COVID-19 classification task under the non-IID setting. As shown in Table 2, our method achieved performance comparable to the data-centralized learning method and outperformed the existing distributed learning methods...
Researcher Affiliation | Academia | 1Department of Bio and Brain Engineering, 2Kim Jaechul Graduate School of AI, 3Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology (KAIST)
Pseudocode | Yes | Algorithm 1: FESTA: Federated Split Task-Agnostic learning
Open Source Code | No | The paper mentions using a modified version of the Flower FL framework, but does not provide an explicit statement of, or link to, open-source code for the described methodology.
Open Datasets | Yes | Table 1: Datasets and sources for COVID-19 diagnosis. We used both public datasets containing labels of infectious disease (Valencian Region Medical Image Bank [BIMCV] [12], Brixia [44, 4], National Institutes of Health [NIH] [55])
Dataset Splits | Yes | Overall, 17,183 PA view CXR images were used for training/validation and 365 PA view CXR images for the test... For the segmentation task... The training dataset was divided randomly with a 4:1 ratio into training and validation datasets... For the object detection task... We randomly divided the entire dataset with a 3:1 ratio into training and testing datasets
Hardware Specification | Yes | All experiments were performed with Python version 3.8 and PyTorch version 1.7 on Nvidia RTX 3090, 2080 Ti, and 1080 Ti.
Software Dependencies | Yes | All experiments were performed with Python version 3.8 and PyTorch version 1.7 on Nvidia RTX 3090, 2080 Ti, and 1080 Ti.
Experiment Setup | Yes | For all tasks, the batch size was 2 per client, and the warm-up step was 500. We set the number of total rounds to 12,000, and the weights of each client's head and tail underwent FedAvg every 100 rounds on the server. ... the customized weights of 1:2:2 were applied for the classification, segmentation, and detection tasks to update the common body weights. We divided the MTL into two steps: jointly training the task-specific heads, tails, and the task-agnostic body (6,000 rounds), then fine-tuning only the task-specific heads and tails with the body weights fixed (6,000 rounds).
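The Algorithm 1 row refers to FESTA's split design, in which each client keeps a small task-specific head and tail while the server hosts the shared task-agnostic body. The following is a minimal PyTorch sketch of one forward pass under that split; the module names, layer sizes, and the MLP stand-in for the paper's Vision Transformer body are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ClientHead(nn.Module):
    """Client-side encoder: maps raw features into the shared embedding space."""
    def __init__(self, in_dim=256, dim=128):
        super().__init__()
        self.proj = nn.Linear(in_dim, dim)

    def forward(self, x):
        return self.proj(x)

class ServerBody(nn.Module):
    """Server-side task-agnostic body shared by all tasks (a stand-in for the ViT)."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, h):
        return self.mlp(h)

class ClientTail(nn.Module):
    """Client-side task-specific output head, e.g. a 3-way classifier."""
    def __init__(self, dim=128, n_classes=3):
        super().__init__()
        self.fc = nn.Linear(dim, n_classes)

    def forward(self, h):
        return self.fc(h)

head, body, tail = ClientHead(), ServerBody(), ClientTail()
x = torch.randn(2, 256)      # per-client batch size of 2, as in the setup row
feats = head(x)              # client -> server: intermediate features
shared = body(feats)         # server applies the shared task-agnostic body
logits = tail(shared)        # server -> client: task-specific prediction
print(tuple(logits.shape))   # (2, 3)
```

In the actual protocol the features and gradients cross a network boundary between client and server; here the three modules simply live in one process to show the data flow.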
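The Dataset Splits row describes a random 4:1 split into training and validation sets. A plain-Python sketch of such a split is below; the function name and the use of integer indices as stand-ins for the CXR image list are assumptions for illustration.

```python
import random

def split_indices(n, ratio=(4, 1), seed=0):
    """Randomly split n sample indices by the given ratio (default 4:1)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # seeded for reproducibility
    cut = n * ratio[0] // sum(ratio)          # size of the first partition
    return idx[:cut], idx[cut:]

train_idx, val_idx = split_indices(1000)
print(len(train_idx), len(val_idx))           # 800 200
```

The 3:1 detection split in the same row is obtained by passing `ratio=(3, 1)` instead.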