Gromov-Wasserstein Discrepancy with Local Differential Privacy for Distributed Structural Graphs
Authors: Hongwei Jin, Xun Chen
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform two tasks: graph classification under federated learning via GW, using benchmark datasets from TUDataset [Morris et al., 2020], and graph clustering on subgraphs generated from a single large graph, using the citation benchmark dataset from Planetoid [Yang et al., 2016]. We report the classification accuracy in Table 1. |
| Researcher Affiliation | Collaboration | Hongwei Jin (University of Illinois at Chicago), Xun Chen (Samsung Research America, Inc.) |
| Pseudocode | Yes | Algorithm 1 LDPG(·, ε): Multi-Bit Encoder (a hedged sketch of a multi-bit encoder appears after the table) |
| Open Source Code | No | The paper does not provide an explicit statement about releasing its own source code or a link to a code repository. |
| Open Datasets | Yes | Dataset. We perform two tasks: graph classification under federated learning via GW, using benchmark datasets from TUDataset [Morris et al., 2020], and graph clustering on subgraphs generated from a single large graph, using the citation benchmark dataset from Planetoid [Yang et al., 2016]. (A dataset-loading sketch follows the table.) |
| Dataset Splits | Yes | For graph classification in the centralized setting, we adopt an SVM using the indefinite kernel matrix exp(−γ·FGW) as a distance between graphs [Titouan et al., 2019], with the training, validation, and testing sets split in the ratio 7 : 2 : 1. For the decentralized setting, we split the graphs across 10 clients via the Dirichlet distribution [Wang et al., 2020], and each client reserves 20% for validation and 10% for testing. (A partition sketch follows the table.) |
| Hardware Specification | No | The paper does not specify the hardware used for experiments, such as particular GPU or CPU models, memory configurations, or cloud computing resources. |
| Software Dependencies | No | The paper mentions software components like 'FedML', 'SVM', 'GCN', and 'GIN', but does not provide specific version numbers for these or any other software dependencies, which are necessary for reproducibility. |
| Experiment Setup | Yes | For all methods using SVM, we cross-validate the parameters C ∈ {10^-7, 10^-6, …, 10^7} and γ ∈ {2^-10, 2^-9, …, 2^10}. GCN denotes two graph convolutional layers with a hidden dimension of 16, followed by average pooling. GIN denotes five graph isomorphism layers with a hidden dimension of 64, followed by two linear perceptron layers. We extract the node embedding after the softmax layer and set α = 0, β = 1. The default value is ε = 1·|V|, which varies with each graph. A shared 2-layer GCN model with a node classification task is deployed in the federated setting, where each client builds the node embedding. We retrieve the node embedding after a fixed 200 rounds of updating the shared model. (Kernel grid-search and model sketches follow the table.) |
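
The benchmark collections quoted above are both available through PyTorch Geometric. A minimal loading sketch follows; the specific dataset names (MUTAG, Cora) and root paths are illustrative assumptions, since the table entries do not name the exact members of TUDataset or Planetoid.

```python
# Minimal sketch: loading the quoted benchmarks with PyTorch Geometric.
# The dataset names and root paths below are illustrative assumptions.
from torch_geometric.datasets import TUDataset, Planetoid

graphs = TUDataset(root="data/TUDataset", name="MUTAG")    # graph classification task
citations = Planetoid(root="data/Planetoid", name="Cora")  # source graph for clustering

print(len(graphs), graphs.num_classes)
print(citations[0])  # one large citation graph, later split into subgraphs
```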
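The pseudocode entry refers to a multi-bit encoder (Algorithm 1, LDPG). Below is a minimal NumPy sketch of the standard multi-bit mechanism for ε-LDP feature perturbation, with the usual encoder/rectifier pair; the sampling probabilities and rectification constants follow the common formulation, and the exact details of the paper's Algorithm 1 may differ.

```python
import numpy as np

def multi_bit_encode(x, eps, m, x_min=0.0, x_max=1.0, rng=None):
    """Encode a d-dim feature vector x under eps-LDP.

    Samples m of d dimensions and releases a value in {-1, 0, 1} per
    dimension (0 = not sampled). Standard multi-bit mechanism; the
    constants in the paper's Algorithm 1 may differ.
    """
    rng = rng or np.random.default_rng()
    d = len(x)
    sampled = rng.choice(d, size=m, replace=False)
    y = np.zeros(d)
    e = np.exp(eps / m)
    for i in sampled:
        # Bernoulli probability grows linearly with the normalized feature.
        p = 1.0 / (e + 1.0) + ((x[i] - x_min) / (x_max - x_min)) * (e - 1.0) / (e + 1.0)
        y[i] = 1.0 if rng.random() < p else -1.0
    return y

def multi_bit_rectify(y, eps, m, d, x_min=0.0, x_max=1.0):
    """Server-side unbiased estimate of the original features."""
    e = np.exp(eps / m)
    scale = d * (x_max - x_min) / (2.0 * m) * (e + 1.0) / (e - 1.0)
    return scale * y + (x_max + x_min) / 2.0
```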
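The splits entry describes a non-IID partition of graphs across 10 clients drawn from a Dirichlet distribution [Wang et al., 2020]. A hedged sketch of this standard label-wise partition is below; the concentration parameter alpha=0.5 and the seed are illustrative assumptions, as the table entry does not state them.

```python
import numpy as np

def dirichlet_split(labels, n_clients=10, alpha=0.5, seed=0):
    """Partition graph indices across clients, non-IID via Dirichlet.

    For each class, draws client proportions from Dirichlet(alpha) and
    assigns that class's graphs accordingly. Each client would then hold
    its own local 70/20/10-style train/val/test split.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients
```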
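The experiment setup pairs an SVM on the indefinite kernel exp(−γ·FGW) with a grid search over C and γ. The sketch below uses POT's fused_gromov_wasserstein2 for pairwise distances and scikit-learn's precomputed-kernel SVC; the graph representation (structure matrix, node features, node weights) and the squared-Euclidean feature cost are simplifying assumptions, not the paper's exact pipeline.

```python
import numpy as np
from ot.gromov import fused_gromov_wasserstein2
from sklearn.svm import SVC

def fgw_distance_matrix(graphs, alpha=0.5):
    """Pairwise FGW distances; each graph = (C, F, p) with structure
    matrix C, node feature matrix F, and node weight vector p."""
    n = len(graphs)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            Ci, Fi, pi = graphs[i]
            Cj, Fj, pj = graphs[j]
            # Squared-Euclidean feature cost between node feature rows.
            M = np.linalg.norm(Fi[:, None, :] - Fj[None, :, :], axis=2) ** 2
            D[i, j] = D[j, i] = fused_gromov_wasserstein2(M, Ci, Cj, pi, pj, alpha=alpha)
    return D

def fit_best_svm(D_train, y_train, D_val, y_val):
    """Grid search over the quoted ranges C in {1e-7..1e7}, gamma in
    {2^-10..2^10}. D_train is (n_train, n_train); D_val is the
    (n_val, n_train) cross-distance block, as the precomputed-kernel
    SVC requires kernels against the training set."""
    best_model, best_acc = None, -1.0
    for C in 10.0 ** np.arange(-7, 8):
        for gamma in 2.0 ** np.arange(-10, 11):
            svc = SVC(C=C, kernel="precomputed")
            svc.fit(np.exp(-gamma * D_train), y_train)
            acc = svc.score(np.exp(-gamma * D_val), y_val)
            if acc > best_acc:
                best_model, best_acc = svc, acc
    return best_model, best_acc
```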
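Finally, the GCN and GIN encoders described in the setup entry (two graph convolutional layers with hidden dimension 16 plus average pooling; five graph isomorphism layers with hidden dimension 64 plus two linear layers) can be sketched with PyTorch Geometric. Activation placement and the shape of the GIN MLPs are assumptions.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, GINConv, global_mean_pool

class GCNEncoder(torch.nn.Module):
    """Two graph convolutional layers (hidden dim 16) + average pooling."""
    def __init__(self, in_dim, num_classes, hidden=16):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = self.conv2(x, edge_index)
        return global_mean_pool(x, batch)

class GINEncoder(torch.nn.Module):
    """Five graph isomorphism layers (hidden dim 64) + two linear layers."""
    def __init__(self, in_dim, num_classes, hidden=64, layers=5):
        super().__init__()
        self.convs = torch.nn.ModuleList()
        dims = [in_dim] + [hidden] * layers
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            mlp = torch.nn.Sequential(
                torch.nn.Linear(d_in, d_out), torch.nn.ReLU(),
                torch.nn.Linear(d_out, d_out))
            self.convs.append(GINConv(mlp))
        self.lin1 = torch.nn.Linear(hidden, hidden)
        self.lin2 = torch.nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        x = global_mean_pool(x, batch)
        return self.lin2(F.relu(self.lin1(x)))
```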