Federated Graph Classification over Non-IID Graphs

Authors: Han Xie, Jing Ma, Li Xiong, Carl Yang

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results and in-depth analysis demonstrate the effectiveness of our proposed frameworks."
Researcher Affiliation | Academia | Han Xie, Jing Ma, Li Xiong, Carl Yang; Department of Computer Science, Emory University; {han.xie, jing.ma, lxiong, j.carlyang}@emory.edu
Pseudocode | No | The paper describes its algorithms and their steps but does not include formal pseudocode or an algorithm block.
Open Source Code | Yes | "We include our models and code in the supplemental material."
Open Datasets | Yes | "We use a total of 13 graph classification datasets [30] from three domains including seven molecule datasets (MUTAG, BZR, COX2, DHFR, PTC_MR, AIDS, NCI1), three protein datasets (ENZYMES, DD, PROTEINS), and three social network datasets (COLLAB, IMDB-BINARY, IMDB-MULTI), each with a set of graphs. Node features are available in some datasets, and graph labels are either binary or multi-class. Details of the datasets are presented in Appendix C." [30] Christopher Morris, Nils M. Kriege, Franka Bause, Kristian Kersting, Petra Mutzel, and Marion Neumann. TUDataset: A collection of benchmark datasets for learning with graphs. In ICMLW, 2020. (A dataset-loading sketch follows the table.)
Dataset Splits | Yes | "The two important hyper-parameters ε1 and ε2 as clustering criteria vary in different groups of data, which are set through offline training for about 50 rounds following [33]... can be easily set through some simple experiments on the validation sets following [33]." (A split sketch follows the table.)
Hardware Specification | Yes | "We run all experiments for five random repetitions on a server with 8 × 24GB NVIDIA TITAN RTX GPUs."
Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify software dependencies with version numbers (e.g., Python, or specific libraries such as PyTorch or TensorFlow).
Experiment Setup | Yes | "We use the three-layer GINs with hidden size of 64. We use a batch size of 128, and an Adam [20] optimizer with learning rate 0.001 and weight decay 5e-4. The μ for FedProx is set to 0.01. For all FL methods, the local epoch E is set to 1." (A local-training sketch follows the table.)
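
The "Open Datasets" row lists the 13 TUDataset benchmarks [30]. The paper's own loading code is in its supplemental material and is not shown here; the following is a minimal sketch assuming PyTorch Geometric's TUDataset wrapper, with the dataset names taken verbatim from the quote above.

```python
# Minimal sketch (not the authors' code): load the 13 TUDataset benchmarks
# quoted in the "Open Datasets" row via PyTorch Geometric's TUDataset wrapper.
from torch_geometric.datasets import TUDataset

DATASET_NAMES = [
    "MUTAG", "BZR", "COX2", "DHFR", "PTC_MR", "AIDS", "NCI1",  # molecules
    "ENZYMES", "DD", "PROTEINS",                                # proteins
    "COLLAB", "IMDB-BINARY", "IMDB-MULTI",                      # social networks
]

for name in DATASET_NAMES:
    ds = TUDataset(root="data/TUDataset", name=name)  # downloads on first use
    print(f"{name}: {len(ds)} graphs, {ds.num_classes} classes, "
          f"{ds.num_node_features} node features")
```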
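
The "Dataset Splits" row only states that hyper-parameters are tuned on validation sets; the quoted text gives no split ratios. The sketch below assumes an 80/10/10 random split of one client's graphs and reuses the batch size of 128 from the experiment setup; the ratios and loader choices are assumptions, not the authors' protocol.

```python
# Hedged sketch of a train/validation/test split for one client's graph set;
# the 80/10/10 ratios are an assumption (the paper's quote only mentions validation sets).
from torch_geometric.datasets import TUDataset
from torch_geometric.loader import DataLoader

dataset = TUDataset(root="data/TUDataset", name="MUTAG").shuffle()

n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set = dataset[:n_train]
val_set = dataset[n_train:n_train + n_val]
test_set = dataset[n_train + n_val:]

# Batch size 128 is taken from the "Experiment Setup" row.
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
val_loader = DataLoader(val_set, batch_size=128)
test_loader = DataLoader(test_set, batch_size=128)
```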
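
The "Experiment Setup" row specifies a three-layer GIN with hidden size 64, batch size 128, Adam with learning rate 0.001 and weight decay 5e-4, FedProx μ = 0.01, and local epoch E = 1. The sketch below wires those numbers into a standard PyTorch Geometric GIN and a FedProx-style local update (cross-entropy plus the proximal term (μ/2)‖w − w_global‖²); the model layout and the update loop are assumptions based on common implementations, not the authors' released code.

```python
# Sketch of the local training configuration quoted in the "Experiment Setup" row.
# The three-layer GIN, hidden size 64, Adam settings, mu = 0.01, and E = 1 come from
# the quote; the exact model layout and FedProx update loop are assumptions.
import torch
import torch.nn.functional as F
from torch.nn import Linear, ReLU, Sequential
from torch_geometric.nn import GINConv, global_add_pool


class GIN(torch.nn.Module):
    """Three-layer GIN with hidden size 64 and sum pooling for graph classification."""

    def __init__(self, in_dim, num_classes, hidden=64, num_layers=3):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        for i in range(num_layers):
            mlp = Sequential(Linear(in_dim if i == 0 else hidden, hidden),
                             ReLU(), Linear(hidden, hidden))
            self.convs.append(GINConv(mlp))
        self.classifier = Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        return self.classifier(global_add_pool(x, batch))


def local_update(model, global_model, loader, mu=0.01, epochs=1, device="cpu"):
    """One FedProx-style local update: cross-entropy plus (mu/2) * ||w - w_global||^2."""
    model.to(device).train()
    global_params = [p.detach().clone().to(device) for p in global_model.parameters()]
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=5e-4)
    for _ in range(epochs):  # local epoch E = 1 in the paper
        for data in loader:
            data = data.to(device)
            optimizer.zero_grad()
            out = model(data.x, data.edge_index, data.batch)
            prox = sum(((p - g) ** 2).sum()
                       for p, g in zip(model.parameters(), global_params))
            loss = F.cross_entropy(out, data.y) + 0.5 * mu * prox
            loss.backward()
            optimizer.step()
    return model.state_dict()
```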