Federated Compositional Deep AUC Maximization

Authors: Xinwen Zhang, Yihan Zhang, Tianbao Yang, Richard Souvenir, Hongchang Gao

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, extensive experimental results confirm the efficacy of our method. In this section, we present the experimental results to demonstrate the performance of our algorithm."
Researcher Affiliation | Academia | Xinwen Zhang, Temple University, Philadelphia, PA, USA (ellenz@temple.edu); Yihan Zhang, Temple University, Philadelphia, PA, USA (yihan.zhang0002@temple.edu); Tianbao Yang, Texas A&M University, College Station, TX, USA (tianbao-yang@tamu.edu); Richard Souvenir, Temple University, Philadelphia, PA, USA (souvenir@temple.edu); Hongchang Gao, Temple University, Philadelphia, PA, USA (hongchang.gao@temple.edu)
Pseudocode | Yes | Algorithm 1 Local SCGDAM (an illustrative, generic compositional-update sketch appears after this table)
Open Source Code | No | The paper does not include an unambiguous statement about releasing source code or a direct link to a code repository for the methodology described.
Open Datasets | Yes | "In our experiments, we employ six image classification datasets, including CIFAR10 [15], CIFAR100 [15], STL10 [1], Fashion MNIST [32], CATvsDOG, and Melanoma [22]." (See the data-loading sketch after this table.)
Dataset Splits | No | The paper mentions a training set and a testing set but does not specify explicit train/validation/test splits by percentages or counts, nor does it describe a validation-set methodology.
Hardware Specification | No | The paper states "We use 4 devices (i.e., GPUs) in our experiment" but does not provide specific hardware details such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions models such as ResNet20 and DenseNet121 and refers to "the model provided by PyTorch examples" for Fashion MNIST, but it does not specify exact software dependencies with version numbers (e.g., PyTorch, Python, or other library versions).
Experiment Setup | Yes | "The batch size on each device is set to 8 for STL10, 16 for Melanoma, and 32 for the others. For all methods, we use SGD with momentum as the optimizer. The momentum is set to 0.9. The weight decay is set to 0.0001. We use the cosine annealing learning rate scheduler with a warmup period of 5 epochs. The maximum learning rate is set to 0.1. The number of epochs is set to 100 for all datasets." (See the optimizer/scheduler sketch after this table.)
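The pseudocode entry above refers to the paper's Algorithm 1 (Local SCGDAM), for which no source code is released. For reference only, the block below is a minimal, generic sketch of stochastic compositional optimization with a moving-average inner estimator and a momentum-averaged update on a toy quadratic problem; it is not the authors' algorithm, and every name and constant in it (A, b, lr, beta, gamma) is an illustrative assumption.

```python
import numpy as np

# Toy compositional objective: minimize f(E[g(x; xi)]) with
#   g(x; xi) = A @ x + xi   (noisy inner map, E[xi] = 0)
#   f(u)     = 0.5 * ||u - b||^2
# Generic stochastic compositional gradient method with a moving-average
# estimate of the inner function and momentum -- an illustration only,
# not the paper's Local SCGDAM.
rng = np.random.default_rng(0)
d, p = 10, 5
A = rng.normal(size=(p, d))
b = rng.normal(size=p)

x = np.zeros(d)   # decision variable
u = np.zeros(p)   # moving-average estimate of E[g(x; xi)]
m = np.zeros(d)   # momentum buffer for the compositional gradient
lr, beta, gamma = 0.02, 0.1, 0.1   # assumed step size and averaging weights

for t in range(3000):
    xi = 0.1 * rng.normal(size=p)          # stochastic noise in the inner map
    g_sample = A @ x + xi                  # noisy evaluation of the inner function
    u = (1 - beta) * u + beta * g_sample   # track E[g(x; xi)] with a moving average
    grad = A.T @ (u - b)                   # chain rule: J_g^T @ grad_f(u)
    m = (1 - gamma) * m + gamma * grad     # momentum on the stochastic gradient
    x = x - lr * m

print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```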
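Four of the six datasets named in the Open Datasets row ship with torchvision; the sketch below shows one assumed way to obtain them. The transform, root path, and split choices are placeholders, and CATvsDOG and Melanoma are Kaggle-hosted datasets that must be downloaded separately.

```python
import torchvision
import torchvision.transforms as T

# Assumed loading recipe for the torchvision-packaged datasets named in the paper.
transform = T.Compose([T.ToTensor()])

cifar10  = torchvision.datasets.CIFAR10("./data", train=True, download=True, transform=transform)
cifar100 = torchvision.datasets.CIFAR100("./data", train=True, download=True, transform=transform)
stl10    = torchvision.datasets.STL10("./data", split="train", download=True, transform=transform)
fmnist   = torchvision.datasets.FashionMNIST("./data", train=True, download=True, transform=transform)
# CATvsDOG and Melanoma (ISIC) are Kaggle-hosted and must be downloaded separately.
```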
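The experiment-setup excerpt maps naturally onto a PyTorch optimizer/scheduler configuration. The sketch below assumes standard torch.optim APIs and realizes "cosine annealing with a 5-epoch warmup" as LinearLR followed by CosineAnnealingLR via SequentialLR; the placeholder model and per-epoch scheduler stepping are assumptions, since the paper only reports the hyperparameter values.

```python
from torch import nn, optim

# Hyperparameters reported in the paper.
epochs, warmup_epochs, max_lr = 100, 5, 0.1
# Per-device batch size: 8 (STL10), 16 (Melanoma), 32 (other datasets).

model = nn.Linear(3 * 32 * 32, 10)  # placeholder; the paper trains ResNet20 / DenseNet121

optimizer = optim.SGD(model.parameters(), lr=max_lr, momentum=0.9, weight_decay=1e-4)

# Assumed realization of "cosine annealing with a 5-epoch warmup".
warmup = optim.lr_scheduler.LinearLR(optimizer, start_factor=0.01, total_iters=warmup_epochs)
cosine = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs - warmup_epochs)
scheduler = optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[warmup_epochs]
)

for epoch in range(epochs):
    # ... one round of (local) training on each device would run here ...
    scheduler.step()
```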