CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network Inference

Authors: Ran Ran, Wei Wang, Gang Quan, Jieming Yin, Nuo Xu, Wujie Wen

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Based on the NTU-XVIEW skeleton joint dataset, i.e., the largest dataset evaluated homomorphically by far as we are aware of, our experimental results demonstrate that CryptoGCN outperforms state-of-the-art solutions in terms of latency and number of homomorphic operations, achieving as much as a 3.10× speedup in latency and reducing the total Homomorphic Operation Count (HOC) by 77.4%, with a small accuracy loss of 1-1.5%.
Researcher Affiliation | Collaboration | Ran Ran (Lehigh University, rar418@lehigh.edu); Nuo Xu (Lehigh University, nux219@lehigh.edu); Wei Wang (Microsoft, wewang3@microsoft.com); Gang Quan (Florida International University, gaquan@fiu.edu); Jieming Yin (Nanjing University of Posts and Telecommunications, jieming.yin@njupt.edu.cn); Wujie Wen (Lehigh University, wuw219@lehigh.edu)
Pseudocode | Yes | Algorithm 1: AMA Formatting for Encryption; Algorithm 2: Activation Pruning
Open Source Code | Yes | Our code is publicly available at https://github.com/ranran0523/CryptoGCN.
Open Datasets | Yes | NTU-RGB+D [34] is currently the largest dataset with 3D joint annotations for the human action recognition task. It contains 56,880 action clips in 60 action classes. The annotations contain the 3D information (X, Y, Z) of 25 joints for each subject in the skeleton sequences. We choose the NTU-cross-View (NTU-XView) benchmark as the dataset for our evaluation because this benchmark …
Dataset Splits | No | It contains 37,920 and 18,960 clips for training and evaluation, respectively. For better evaluation, we use 256 frames from each video clip as our input data.
Hardware Specification | Yes | Our experiments are conducted on a machine with an AMD Ryzen Threadripper PRO 3975WX, using a single-thread setting.
Software Dependencies | Yes | We use Microsoft SEAL version 3.7.2 [33] to implement an RNS variant of the CKKS [6] scheme.
Experiment Setup | Yes | We use the Stochastic Gradient Descent (SGD) optimizer with a mini-batch size of 64, a momentum of 0.9, and a weight decay of 1e-4 to train the model for 200 epochs. The initial learning rate is set to 0.01 with a decay factor of 0.1.
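The step learning-rate schedule described in the experiment setup (initial rate 0.01, decay factor 0.1) can be sketched as a small pure-Python function. Note the milestone epochs below are hypothetical assumptions for illustration; the quoted setup states only the initial rate, the decay factor, and the 200-epoch budget.

```python
def step_decay_lr(epoch, base_lr=0.01, gamma=0.1, milestones=(100, 150)):
    """Step learning-rate decay: multiply the rate by gamma at each milestone.

    base_lr (0.01) and gamma (0.1) come from the paper's stated setup;
    the milestone epochs (100, 150) are hypothetical placeholders, since
    the excerpt does not say at which epochs the decay is applied.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Sketch of the schedule over the stated 200 training epochs:
# epochs 0-99 run at the initial rate, with two decays afterward.
early_lr = step_decay_lr(0)
late_lr = step_decay_lr(199)
```

In a PyTorch training loop this would typically be expressed with `torch.optim.SGD(..., lr=0.01, momentum=0.9, weight_decay=1e-4)` plus a step scheduler with `gamma=0.1`, matching the hyperparameters quoted above.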