Language-Guided Transformer for Federated Multi-Label Classification
Authors: I-Jieh Liu, Ci-Siang Lin, Fu-En Yang, Yu-Chiang Frank Wang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on various multi-label datasets (e.g., FLAIR, MSCOCO, etc.), we show that our FedLGT is able to achieve satisfactory performance and outperforms standard FL techniques under multi-label FL scenarios. Code is available at https://github.com/Jack24658735/FedLGT. |
| Researcher Affiliation | Collaboration | I-Jieh Liu¹, Ci-Siang Lin¹,², Fu-En Yang¹,², Yu-Chiang Frank Wang¹,²; ¹Graduate Institute of Communication Engineering, National Taiwan University; ²NVIDIA |
| Pseudocode | Yes | The pseudo-code of our proposed framework is described in Algorithm 1. |
| Open Source Code | Yes | Code is available at https://github.com/Jack24658735/FedLGT. |
| Open Datasets | Yes | we conduct extensive evaluations on various benchmark datasets, including FLAIR (Song, Granqvist, and Talwar 2022), MS-COCO (Lin et al. 2014), and PASCAL VOC (Everingham et al. 2015). |
| Dataset Splits | No | The paper does not provide specific percentages or counts for training, validation, and test splits of the datasets. It mentions that 'FLAIR provides real-user data partitions' but gives no further detail on the splits. |
| Hardware Specification | Yes | For all experiments, we implement our model by PyTorch and conduct training on a single NVIDIA RTX 3090 Ti GPU with 24GB memory. |
| Software Dependencies | No | The paper mentions 'PyTorch' as the implementation framework and 'Adam' as the optimizer, but does not specify their version numbers or any other software dependencies with versions. |
| Experiment Setup | Yes | set the threshold τ to 0.5 and the uncertainty margin ε to 0.02. For each round of local training, we train 5 epochs using the Adam (Kingma and Ba 2014) optimizer with a learning rate of 0.0001, and the batch size is set to 16. For the detailed FL settings, the communication round T is set to 50, and the fraction of active clients in each round is designed to achieve a level of participation equivalent to 50 clients, thus ensuring the data distribution is representative of the overall population. (These settings are collected in the sketch after this table.) |
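
To make the reported setup concrete, the sketch below collects the quoted hyperparameters into a single configuration object. This is a minimal illustration assuming a FedAvg-style training schedule; the class and field names (`FedLGTConfig`, `threshold_tau`, etc.) are our own labels for the paper's reported values, not identifiers from the FedLGT repository.

```python
from dataclasses import dataclass

# Minimal sketch of the reported experiment setup. All values come from the
# quoted paper text; every name below is an illustrative assumption, not an
# identifier taken from the FedLGT codebase.

@dataclass(frozen=True)
class FedLGTConfig:
    threshold_tau: float = 0.5        # prediction threshold τ
    uncertainty_margin: float = 0.02  # uncertainty margin ε
    local_epochs: int = 5             # epochs per round of local training
    learning_rate: float = 1e-4       # Adam learning rate
    batch_size: int = 16
    communication_rounds: int = 50    # number of FL communication rounds T
    active_clients: int = 50          # participation level per round


def total_local_epochs(cfg: FedLGTConfig) -> int:
    """Local epochs a client selected in every round would perform in total."""
    return cfg.communication_rounds * cfg.local_epochs


if __name__ == "__main__":
    cfg = FedLGTConfig()
    print(cfg)
    print("local epochs over all rounds:", total_local_epochs(cfg))
```

Under these values, a client selected in every round would run 50 × 5 = 250 local epochs over the full training run; the Adam optimizer (learning rate 0.0001) and batch size 16 would apply inside each client's local training loop.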