Federated Black-Box Adaptation for Semantic Segmentation
Authors: Jay Paranjape, Shameema Sikder, S. Vedula, Vishal Patel
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on several computer vision and medical imaging datasets to demonstrate its effectiveness. |
| Researcher Affiliation | Academia | Jay N. Paranjape, Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, USA (jparanj1@jhu.edu); Shameema Sikder, Wilmer Eye Institute, The Johns Hopkins University, Baltimore, USA; S. Swaroop Vedula, Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, USA; Vishal M. Patel, Dept. of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, USA |
| Pseudocode | Yes | Algorithm 1: Proposed Algorithm for BlackFed v1 |
| Open Source Code | Yes | Code: https://github.com/JayParanjape/blackfed/tree/master |
| Open Datasets | Yes | For evaluating our method, we consider four publicly available datasets, namely (i) Cityscapes [9], (ii) CAMVID [4], (iii) ISIC [21, 8, 45], and (iv) Polypgen [2]. |
| Dataset Splits | Yes | While CAMVID has predefined train, test and validation splits, for Cityscapes, we divide the data from each client into training, validation and testing splits for that client in a 60:20:20 ratio. In this manner, we generate 18 clients for Cityscapes and 4 clients for CAMVID. |
| Hardware Specification | Yes | All experiments are done using a single Nvidia RTX A5000 GPU per client and a single Nvidia RTX A5000 GPU at the server. |
| Software Dependencies | No | The paper mentions optimizers (Adam, SPSA-GC) but does not provide specific version numbers for software dependencies like programming languages or deep learning frameworks. |
| Experiment Setup | Yes | During training, we use c_e = 10 and s_e = 10. The server is optimized using an Adam optimizer and the client is optimized using SPSA-GC. The learning rates of both the client and the server are set to 10^-4, based on validation set performance. The batch size for all experiments is 8, and all images undergo random brightness perturbation with the brightness parameter set to 2. The images for Cityscapes and CAMVID are resized to 256×512 to maintain their aspect ratio, whereas the images for ISIC and Polypgen are resized to 256×256. |
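The Dataset Splits row states that each Cityscapes client's data is divided 60:20:20 into train/validation/test. A minimal sketch of such a per-client split (the function name, seed, and shuffling strategy are assumptions, not taken from the paper):

```python
import random

def split_client_data(items, seed=0):
    """Shuffle one client's samples and split them 60:20:20
    into train / validation / test subsets."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(0.6 * n)
    n_val = int(0.2 * n)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test
```

Fixing the seed keeps the split reproducible across runs, which matters when each of the 18 Cityscapes clients maintains its own local split.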
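The Experiment Setup row says the client is optimized with SPSA-GC, a zeroth-order method suited to the black-box setting where gradients cannot be backpropagated across the client/server boundary. As a rough illustration only, here is plain two-point SPSA (without the gradient-correction/momentum term that distinguishes SPSA-GC); `loss_fn` is a hypothetical black-box loss callable:

```python
import numpy as np

def spsa_gradient(loss_fn, params, c=0.01, rng=None):
    """One SPSA gradient estimate: perturb all parameters at once
    with a random +/-1 direction and use two loss evaluations,
    requiring no access to internal gradients."""
    if rng is None:
        rng = np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=params.shape)
    loss_plus = loss_fn(params + c * delta)
    loss_minus = loss_fn(params - c * delta)
    # Finite-difference estimate along the random direction
    return (loss_plus - loss_minus) / (2.0 * c) * delta

# Usage sketch: minimize a toy quadratic with SPSA updates.
rng = np.random.default_rng(42)
x = np.ones(5)
loss = lambda p: float(np.sum(p ** 2))
for _ in range(200):
    x = x - 0.05 * spsa_gradient(loss, x, c=0.01, rng=rng)
```

Only two forward evaluations are needed per update regardless of the parameter dimension, which is what makes this family of methods practical for black-box federated optimization.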