CalFAT: Calibrated Federated Adversarial Training with Label Skewness
Authors: Chen Chen, Yuchen Liu, Xingjun Ma, Lingjuan Lyu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on 4 benchmark vision datasets across various settings prove the effectiveness of our CalFAT and its superiority over existing FAT methods. Table 1 shows the results of all methods on CIFAR10, CIFAR100, SVHN, and the ImageNet subset, and the paper also reports ablation studies. |
| Researcher Affiliation | Collaboration | Chen Chen (Zhejiang University), Yuchen Liu (Zhejiang University), Xingjun Ma (Fudan University), Lingjuan Lyu (Sony AI) |
| Pseudocode | Yes | Algorithm 1: Local training of CalFAT |
| Open Source Code | No | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] |
| Open Datasets | Yes | Our experiments are conducted on 4 real-world datasets: CIFAR10 [13], CIFAR100 [13], SVHN [25], and ImageNet subset [6]. |
| Dataset Splits | No | The paper uses standard datasets (CIFAR10, CIFAR100, SVHN, and the ImageNet subset) and simulates label skewness with a Dirichlet distribution (a partitioning sketch follows this table). It mentions the communication round and the local epoch number E for training, but the main text never gives explicit percentages or counts for training, validation, and test splits; it defers the detailed experimental setup to Appendix D.1, which is not included in the supplied text. |
| Hardware Specification | No | The paper's checklist (Question 3d) claims that the total amount of compute and the type of resources used are reported, yet the provided text gives no concrete hardware details such as GPU models, CPU types, or memory specifications. It only states that 'More detailed experimental setup is provided in Appendix D.1.', and Appendix D.1 is not included in the provided text. |
| Software Dependencies | No | The paper does not explicitly list any specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or other libraries) used in the experiments. |
| Experiment Setup | Yes | Algorithm 1 iterates 'for local epoch = 1, ..., E', specifying the number of local epochs. Section 2.2 defines the PGD hyperparameters: K is the total number of steps (i.e., $\tilde{x}_j = x_j^{(K)}$) and $\alpha > 0$ is the step size (see the PGD sketch after this table). It also notes that 'More detailed experimental setup is provided in Appendix D.1'. |
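
Two illustrative sketches for the techniques quoted above follow. First, the Dirichlet-based label-skew simulation from the Dataset Splits row: a minimal Python sketch, assuming a concentration parameter `beta` and the helper name `dirichlet_partition`, neither of which is quoted in the provided text.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, beta=0.5, seed=0):
    """Simulate label skewness by splitting each class's sample indices
    across clients according to a Dirichlet(beta) draw. `beta` and the
    helper name are illustrative assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Proportion of class c assigned to each client.
        props = rng.dirichlet(beta * np.ones(num_clients))
        cuts = (np.cumsum(props) * len(idx)).astype(int)[:-1]
        for client, part in zip(client_indices, np.split(idx, cuts)):
            client.extend(part.tolist())
    return client_indices
```

Smaller values of `beta` yield more severe label skew across clients, which is the regime CalFAT targets.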
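Second, the PGD inner maximization whose hyperparameters are quoted in the Experiment Setup row: a minimal L-infinity PGD sketch in PyTorch, where `K` and `alpha` match the paper's notation while the perturbation budget `eps` and the function name are assumptions not given in the provided text.

```python
import torch

def pgd_attack(model, x, y, loss_fn, eps=8/255, alpha=2/255, K=10):
    """L-infinity PGD: K is the total number of steps and alpha > 0 is the
    step size, matching Section 2.2's notation; the returned tensor plays
    the role of x~_j = x_j^(K). `eps` is an assumed perturbation budget."""
    x_adv = x.clone().detach()
    for _ in range(K):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient ascent step, then projection back into the eps-ball
        # around the clean input x and the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv.detach()
```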