Federated Learning with Positive and Unlabeled Data
Authors: Xinyang Lin, Hanting Chen, Yixing Xu, Chao Xu, Xiaolin Gui, Yiping Deng, Yunhe Wang
ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical experiments show that FedPU can achieve much better performance than conventional supervised and semi-supervised federated learning methods. Experiments on MNIST and CIFAR datasets empirically show that the proposed method can achieve better performance than existing federated learning algorithms. |
| Researcher Affiliation | Collaboration | 1 Faculty of Electronic and Information Engineering, Xi'an Jiaotong University; 2 Huawei Noah's Ark Lab; 3 Key Lab of Machine Perception (MOE), Department of Machine Intelligence, Peking University, China; 4 Central Software Institution, Huawei Technologies. |
| Pseudocode | Yes | Algorithm 1: The proposed FedPU learning algorithm. (A hedged code sketch of such a federated PU training loop is given after the table.) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We conduct experiments on the MNIST dataset, which is composed of images with 28×28 pixels from 10 categories. The MNIST dataset consists of 60,000 training images and 10,000 testing images. ... We further evaluate our method on the CIFAR-10 dataset. The CIFAR-10 dataset consists of 50,000 training images and 10,000 testing images with size 32×32×3 from 10 categories. |
| Dataset Splits | No | The paper specifies training and testing sets for MNIST and CIFAR-10, but it does not explicitly mention a distinct validation set or its split details. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions "The SGD optimizer is used to train the network with momentum 0.5" but does not specify any software names with version numbers, such as programming languages or libraries. |
| Experiment Setup | Yes | The SGD optimizer is used to train the network with momentum 0.5. For federated learning, we set the communication round as 200. For each client, the local epoch and local batch size for training the network in each round are set to 1 and 100, respectively. The learning rate is initialized as 0.01 and exponentially decayed by 0.995 over communication rounds on the MNIST dataset. (A configuration sketch reflecting these values follows the table.) |
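
To make the pseudocode row concrete, below is a minimal sketch of a FedAvg-style loop in which each client minimizes a positive-unlabeled (PU) risk, which is the general shape of the paper's Algorithm 1. The binary nnPU-style `pu_risk` here is a simplified placeholder, not the authors' multi-class PU risk estimator, and the model, client data loaders, and class prior `pi_p` are assumptions for illustration only.

```python
# Sketch of a FedAvg-style loop with a PU-style local objective.
# The binary nnPU-style risk below is a simplified placeholder and NOT the
# multi-class estimator of the paper's Algorithm 1; model, loaders, and the
# class prior pi_p are hypothetical.
import copy
import torch
import torch.nn as nn


def pu_risk(scores, is_positive, pi_p, loss=nn.functional.softplus):
    """Non-negative PU risk (binary nnPU-style placeholder).

    scores: model outputs of shape (N,); is_positive: bool mask of labeled
    positives; pi_p: assumed positive-class prior.
    """
    pos, unl = scores[is_positive], scores[~is_positive]
    r_p_plus = loss(-pos).mean() if pos.numel() else scores.new_tensor(0.0)
    r_p_minus = loss(pos).mean() if pos.numel() else scores.new_tensor(0.0)
    r_u_minus = loss(unl).mean() if unl.numel() else scores.new_tensor(0.0)
    # Clamp the estimated negative risk at zero (non-negative correction).
    return pi_p * r_p_plus + torch.clamp(r_u_minus - pi_p * r_p_minus, min=0.0)


def local_update(global_model, loader, pi_p, lr):
    """One local epoch of PU-risk minimization on a single client."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.5)
    for x, is_positive in loader:  # loader yields (inputs, labeled-positive mask)
        opt.zero_grad()
        risk = pu_risk(model(x).squeeze(-1), is_positive, pi_p)
        risk.backward()
        opt.step()
    return model.state_dict()


def federated_round(global_model, client_loaders, pi_p, lr):
    """Aggregate client updates by simple parameter averaging (FedAvg-style)."""
    states = [local_update(global_model, dl, pi_p, lr) for dl in client_loaders]
    avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
           for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```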
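
The experiment-setup row maps directly onto a standard PyTorch configuration. The sketch below only wires up the quoted hyperparameters (SGD with momentum 0.5, initial learning rate 0.01, exponential decay of 0.995 per communication round, 200 rounds, 1 local epoch, local batch size 100); the placeholder network and the elided local-training step are assumptions, not the paper's architecture or code.

```python
# Sketch of the quoted MNIST training configuration. Values come from the
# paper's stated setup; the network and training loop body are placeholders.
import torch
import torch.nn as nn

ROUNDS, LOCAL_EPOCHS, LOCAL_BATCH_SIZE = 200, 1, 100

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # placeholder net
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
# Exponential decay of the learning rate over communication rounds.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.995)

for rnd in range(ROUNDS):
    # ... run LOCAL_EPOCHS of local training on each client with batches of
    # size LOCAL_BATCH_SIZE, then aggregate the client models (see the
    # federated_round sketch above) ...
    scheduler.step()  # after round rnd, lr = 0.01 * 0.995 ** (rnd + 1)
```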