Practical Privacy-Preserving MLaaS: When Compressive Sensing Meets Generative Networks
Authors: Jia Wang, Wuqiang Su, Zushu Huang, Jie Chen, Chengwen Luo, Jianqiang Li
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results confirmed its performance superiority in accuracy and resource consumption against state-of-the-art privacy-preserving MLaaS frameworks. In our experiments, we use three image datasets to evaluate the performance of our model: the MNIST dataset of handwritten digits (Lecun et al. 1998), the Street View House Numbers (SVHN) dataset (Netzer et al. 2011), and the CIFAR-10 dataset (Krizhevsky, Hinton et al. 2009). |
| Researcher Affiliation | Academia | Jia Wang1, Wuqiang Su1, Zushu Huang1, Jie Chen1, Chengwen Luo2, Jianqiang Li2* — 1 College of Computer Science and Software Engineering, Shenzhen University; 2 National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University; 3688 Nanhai Avenue, Shenzhen, Guangdong Province, China. {jia.wang, suwuqiang2019}@szu.edu.cn, 2060271076@email.szu.edu.cn, {chenjie, chengwen, lijq}@szu.edu.cn |
| Pseudocode | Yes | Algorithm 1: Algorithm of adding noise to original signals of the training dataset (a noise-injection sketch follows the table). |
| Open Source Code | No | The paper states that "more details of the hyper-parameters can be found in the code repository," but no repository link or explicit statement of code availability for their work is provided. |
| Open Datasets | Yes | In our experiments, we use three image datasets to evaluate the performance of our model: the MNIST dataset of handwritten digits (Lecun et al. 1998), the Street View House Numbers (SVHN) dataset (Netzer et al. 2011), and the CIFAR-10 dataset (Krizhevsky, Hinton et al. 2009). |
| Dataset Splits | Yes | While the test set of each dataset is kept unchanged, MNIST, SVHN and CIFAR-10 hold out 10,000, 10,000 and 5,000 images, respectively, from their training sets as validation sets (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details for running its experiments. |
| Software Dependencies | No | Our implementation is based on TensorFlow (no version specified). |
| Experiment Setup | Yes | In all experiments, the measurement matrix is chosen to be a Gaussian random matrix and the Adam optimizer (Kingma and Ba 2015) is used to train the model. In all experiments of DCMG-O and DCMGN, we set λ = 100,000. In all experiments of DCMGN, to construct the training dataset we set λ_noise = 2, p = 0.5, c = n/m for MNIST and λ_noise = 1, p = 0.5, c = n/m for SVHN and CIFAR-10 (see the setup sketch after the table). |
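
The pseudocode row refers to the paper's Algorithm 1 for adding noise to the original training signals. The quoted text does not specify the noise distribution or exactly how λ_noise and p enter, so the following is only a minimal sketch under the assumption that zero-mean Gaussian noise of scale λ_noise is added to each training sample independently with probability p; `add_training_noise` is a hypothetical helper name, not the paper's code.

```python
import numpy as np

def add_training_noise(x_train, lam_noise=2.0, p=0.5, seed=0):
    """Sketch of a noise-injection pass over the training set.

    Assumptions (not stated in the quoted text): zero-mean Gaussian noise
    with standard deviation lam_noise, applied independently to each
    sample with probability p.
    """
    rng = np.random.default_rng(seed)
    x_noisy = x_train.astype(np.float32).copy()
    for i in range(len(x_noisy)):
        if rng.random() < p:  # perturb this sample with probability p
            noise = rng.normal(0.0, lam_noise, size=x_noisy[i].shape)
            x_noisy[i] += noise.astype(np.float32)
    return x_noisy
```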
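The dataset-split row can be reproduced directly from the reported numbers. The sketch below loads MNIST and CIFAR-10 via `tf.keras.datasets` and holds out 10,000 (MNIST) or 5,000 (CIFAR-10) training images for validation; holding out the *last* images is an assumption, since the paper does not say how the validation images were selected, and SVHN is not bundled with `tf.keras.datasets`, so it would have to be loaded separately (e.g. via `tensorflow_datasets`).

```python
import tensorflow as tf

def load_with_validation(name):
    """Split off a validation set of the size reported in the paper.

    MNIST and SVHN: 10,000 validation images; CIFAR-10: 5,000.
    Taking the last n_val training images is an assumption of this sketch.
    """
    if name == "mnist":
        (x, y), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
        n_val = 10_000
    elif name == "cifar10":
        (x, y), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
        n_val = 5_000
    else:
        raise ValueError("SVHN must be loaded separately, e.g. via tensorflow_datasets")
    x_train, y_train = x[:-n_val], y[:-n_val]
    x_val, y_val = x[-n_val:], y[-n_val:]
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```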
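The experiment-setup row states that the measurement matrix is a Gaussian random matrix, that Adam is used for training, and that c = n/m acts as the compression ratio. Since the implementation is TensorFlow-based but no further details are quoted, the following is a minimal sketch: the MNIST signal size, the value of c, and the 1/√m scaling of the matrix are illustrative assumptions, and the learning rate is not reported.

```python
import tensorflow as tf

n = 28 * 28          # signal dimension (MNIST example; assumption)
c = 4                # compression ratio c = n / m (illustrative value)
m = n // c           # number of compressive measurements

# Gaussian random measurement matrix; the 1/sqrt(m) scaling is a common
# compressive-sensing convention, not something stated in the paper.
phi = tf.random.normal([m, n], stddev=1.0 / (m ** 0.5))

def measure(x_batch):
    """Compressive measurements y = Phi x for a batch of flattened signals."""
    return tf.matmul(x_batch, phi, transpose_b=True)  # shape: (batch, m)

# Adam optimizer, as stated in the paper (learning rate not reported).
optimizer = tf.keras.optimizers.Adam()
```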