Fed-GraB: Federated Long-tailed Learning with Self-Adjusting Gradient Balancer

Authors: Zikai Xiao, Zihan Chen, Songshang Liu, Hualiang Wang, Yang Feng, Jin Hao, Joey Tianyi Zhou, Jian Wu, Howard Yang, Zuozhu Liu

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that Fed-GraB achieves state-of-the-art performance on representative datasets such as CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist.
Researcher Affiliation | Collaboration | 1 Zhejiang University, 2 Singapore University of Technology and Design, 3 The Hong Kong University of Science and Technology, 4 Angelalign Technology Inc., 5 State Key Laboratory of Oral Diseases, Sichuan University, 6 Centre for Frontier AI Research (CFAR), A*STAR, Singapore
Pseudocode | Yes | Algorithm 1 Local training process of Fed-GraB
Open Source Code | Yes | Our codes are available at https://github.com/ZackZikaiXiao/FedGraB.
Open Datasets | Yes | Datasets: We conduct the experiments in three benchmark datasets for long-tailed classification, i.e., CIFAR-10/100-LT [57], ImageNet-LT [58]. ... To evaluate the performance on real-world data, we also conduct experiments on iNaturalist-User-160k, with 160k examples of 1,023 species classes and partitioned on the basis of iNaturalist-2017 [59]. (A sketch of the usual CIFAR-LT construction is given after this table.)
Dataset Splits | No | The paper mentions the benchmark datasets and discusses training and testing, but it does not explicitly state training/validation/test splits as percentages or sample counts, nor does it cite predefined splits, which would be needed to reproduce the data setup beyond general dataset usage.
Hardware Specification | No | The paper does not describe the hardware used to run its experiments, such as GPU or CPU models or other computational resources.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., Python 3.x, PyTorch x.x) needed to replicate the experiment environment.
Experiment Setup | Yes | Federated settings: We use non-IID data partitions for all experiments, implemented via symmetric Dirichlet distributions with concentration parameter α to control the identicalness of local data distributions among all the clients. We train a ResNet-18 over N = 40 clients on CIFAR-10-LT. ResNet-34 and ResNet-50 are used on CIFAR-100-LT and ImageNet-LT, respectively, with N = 20 clients. For iNaturalist-160k, we use the same settings as ImageNet-LT. ... For CReFF, the number of federated features is 100; we use 0.1 and 0.01 as the federated feature learning rate and main-net learning rate, respectively, on CIFAR-10/100-LT. (A sketch of the Dirichlet partitioning is given after this table.)
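
For the Open Datasets row: the excerpt does not spell out how the long-tailed CIFAR variants are built. The sketch below shows the exponential-imbalance construction commonly used to derive CIFAR-10/100-LT from the balanced datasets; the function name, the default imbalance factor, and the subsampling rule are illustrative assumptions, not details confirmed by the paper.

```python
# Hypothetical sketch of the standard exponential-imbalance construction
# often used for CIFAR-10/100-LT; the imbalance factor and subsampling
# rule are assumptions for illustration, not taken from the paper.
import numpy as np

def long_tailed_indices(labels, num_classes, imbalance_factor=100, seed=0):
    """Return indices of a long-tailed subset of a class-balanced dataset.

    Class c keeps about n_max * IF^(-c / (num_classes - 1)) samples, so
    class 0 stays full size and the last class keeps n_max / IF samples.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    n_max = np.bincount(labels, minlength=num_classes).max()

    kept = []
    for c in range(num_classes):
        n_c = int(n_max * imbalance_factor ** (-c / (num_classes - 1)))
        class_idx = np.where(labels == c)[0]
        kept.append(rng.choice(class_idx, size=min(n_c, len(class_idx)), replace=False))
    return np.concatenate(kept)
```

Calling `long_tailed_indices(train_labels, num_classes=10)` would yield a CIFAR-10-LT-style index set; the exact imbalance factors used in the paper are not stated in this excerpt.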
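
For the Experiment Setup row: the quoted passage describes non-IID partitioning via symmetric Dirichlet distributions with concentration parameter α over N clients. Below is a minimal sketch of such a partition, assuming a flat label array; the function name is illustrative, and the α default is a placeholder since this excerpt does not state the value used.

```python
# Minimal sketch of symmetric-Dirichlet client partitioning, following the
# "Federated settings" excerpt above; alpha controls how identical local
# label distributions are, num_clients corresponds to N. Names and the
# alpha default are illustrative assumptions.
import numpy as np

def dirichlet_partition(labels, num_clients=40, alpha=0.5, seed=0):
    """Split sample indices across clients with a symmetric Dirichlet prior.

    For each class, a Dirichlet(alpha, ..., alpha) draw gives the fraction of
    that class's samples assigned to each client; smaller alpha means more
    skewed (less identical) local label distributions.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]

    for c in np.unique(labels):
        class_idx = rng.permutation(np.where(labels == c)[0])
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        # Turn the per-client fractions into split points within this class.
        split_points = (np.cumsum(proportions)[:-1] * len(class_idx)).astype(int)
        for client_id, part in enumerate(np.split(class_idx, split_points)):
            client_indices[client_id].extend(part.tolist())

    return [np.array(idx) for idx in client_indices]
```

With `num_clients=40` this matches the N = 40 CIFAR-10-LT setting quoted above (N = 20 for CIFAR-100-LT and ImageNet-LT); the `alpha=0.5` default is purely illustrative.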