Bridging Gaps: Federated Multi-View Clustering in Heterogeneous Hybrid Views

Authors: Xinyue Chen, Yazhou Ren, Jie Xu, Fangfei Lin, Xiaorong Pu, Yang Yang

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Theoretical analysis and extensive experiments demonstrate that our method can handle the heterogeneous hybrid views in FedMVC and outperforms state-of-the-art methods.
Researcher Affiliation | Academia | Xinyue Chen^1, Yazhou Ren^1,2, Jie Xu^1, Fangfei Lin^1, Xiaorong Pu^1,2, Yang Yang^1. 1) School of Computer Science and Engineering, University of Electronic Science and Technology of China, China; 2) Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China, China.
Pseudocode | Yes | A Framework of the Proposed Algorithm: Algorithm 1 outlines the execution flow for both clients and the server in FMCSC. (A minimal sketch of this client/server flow appears below the table.)
Open Source Code | Yes | The code is available at https://github.com/5Martina5/FMCSC.
Open Datasets | Yes | Datasets. Our experiments are carried out on four multi-view datasets. Specifically, MNIST-USPS [34] comprises 5000 samples collected from two handwritten digit image datasets, which are treated as two views. BDGP [4] consists of 2500 samples across 5 drosophila categories, with each sample having a textual and a visual view. Multi-Fashion [41] contains images from 10 categories, where three different styles of one object are treated as three views, yielding 10000 samples. NUSWIDE [9] consists of 5000 samples obtained from web images with 5 views. (A sketch of how such aligned views can be wrapped as a paired dataset appears below the table.)
Dataset Splits | No | No explicit validation splits (e.g., specific percentages or sample counts for a dedicated validation set) are detailed in the paper. The paper mentions training and testing but does not describe a validation split methodology.
Hardware Specification | Yes | The models of all methods are implemented on the PyTorch [33] platform using NVIDIA RTX-3090 GPUs.
Software Dependencies | No | The paper names PyTorch [33] as the implementation platform, but it does not specify a version number for PyTorch or list any other software dependencies; components such as ReLU and Adam are mentioned without library or version details.
Experiment Setup | Yes | For all the datasets used, the learning rate is fixed at 0.0003, the batch size is set to 256, and the temperature parameters τ_m and τ_p are both set to 0.5. Local pre-training is performed for 250 epochs on all datasets. After each communication round between the server and clients, local training is conducted for 10 epochs on the BDGP dataset and 25 epochs on the other datasets on each client. The number of communication rounds between the server and clients is set to R = 5. (These values are collected into a config sketch below the table.)
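
Since Algorithm 1 is only summarized above, here is a minimal, runnable Python sketch of the kind of client/server loop it describes: local pre-training on each client, followed by communication rounds of server-side aggregation and local refinement. The Client class, the FedAvg-style averaging in aggregate, the toy encoder, and the dummy loss are all illustrative assumptions, not the authors' actual FMCSC implementation (see the linked repository for that).

```python
# Hypothetical sketch of a federated client/server loop in the spirit of
# Algorithm 1 (FMCSC). Class and function names are assumptions.
import copy
import torch
import torch.nn as nn

class Client:
    def __init__(self, data, lr=3e-4):
        self.data = data  # this client's local view of the samples
        self.model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                                   nn.Linear(256, 10))
        self.opt = torch.optim.Adam(self.model.parameters(), lr=lr)

    def local_train(self, epochs):
        # Placeholder objective: the paper's clustering/contrastive losses
        # would go here; a dummy loss keeps the sketch runnable end to end.
        for _ in range(epochs):
            self.opt.zero_grad()
            loss = self.model(self.data).pow(2).mean()
            loss.backward()
            self.opt.step()

    def get_state(self):
        return copy.deepcopy(self.model.state_dict())

    def load_state(self, state):
        self.model.load_state_dict(state)

def aggregate(states):
    # Simple parameter averaging (FedAvg-style) as a stand-in for the
    # paper's server-side aggregation.
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

clients = [Client(torch.randn(256, 784)) for _ in range(4)]

# Local pre-training phase (250 epochs in the paper; shortened here).
for c in clients:
    c.local_train(epochs=3)

# R = 5 communication rounds, matching the reported setup.
for _ in range(5):
    global_state = aggregate([c.get_state() for c in clients])
    for c in clients:
        c.load_state(global_state)
        c.local_train(epochs=1)
```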
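
For readers unfamiliar with the multi-view setup, the sketch below shows one way aligned views (e.g., MNIST and USPS images of the same digits) can be wrapped as a paired dataset. The class name and the random tensors are assumptions for illustration; only the 5000-sample count and the 28x28 / 16x16 image sizes reflect the actual MNIST-USPS benchmark.

```python
import torch
from torch.utils.data import Dataset

class TwoViewDataset(Dataset):
    """Pairs two aligned views of the same samples, e.g. MNIST and USPS
    digits of the same instances, as in the MNIST-USPS benchmark."""
    def __init__(self, view1, view2, labels):
        assert len(view1) == len(view2) == len(labels)
        self.view1, self.view2, self.labels = view1, view2, labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        # Each item yields both views plus the (cluster-evaluation) label.
        return self.view1[idx], self.view2[idx], self.labels[idx]

# 5000 paired samples, matching the reported MNIST-USPS size.
ds = TwoViewDataset(torch.randn(5000, 1, 28, 28),   # MNIST-sized view
                    torch.randn(5000, 1, 16, 16),   # USPS-sized view
                    torch.randint(0, 10, (5000,)))
```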
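
Finally, the reported hyperparameters can be collected into a single configuration for reference. Every value below is taken from the experiment-setup excerpt above; the dictionary layout itself is just an organizational choice.

```python
# Hyperparameters as reported in the paper's experiment setup.
CONFIG = {
    "lr": 3e-4,              # learning rate, fixed for all datasets
    "batch_size": 256,
    "tau_m": 0.5,            # temperature parameters
    "tau_p": 0.5,
    "pretrain_epochs": 250,  # local pre-training on all datasets
    "local_epochs": {"BDGP": 10, "default": 25},  # per communication round
    "rounds": 5,             # communication rounds R
}
```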