Interpretable Complex-Valued Neural Networks for Privacy Protection

Authors: Liyao Xiang, Hao Zhang, Haotian Ma, Yifan Zhang, Jie Ren, Quanshi Zhang

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Preliminary experiments on various datasets and network structures have shown that our method significantly diminishes the adversary's ability in inferring about the input while largely preserves the resulting accuracy."
Researcher Affiliation | Academia | Liyao Xiang (a), Hao Zhang (a), Haotian Ma (b), Yifan Zhang (a), Jie Ren (a), and Quanshi Zhang (a). (a) Shanghai Jiao Tong University; (b) South China University of Technology.
Pseudocode | No | The paper does not contain structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about the release of source code or a direct link to a code repository.
Open Datasets | Yes | "We revised a variety of classical DNNs to complex-valued DNNs and tested them on different datasets to demonstrate the broad applicability of our method. Without loss of generality, tasks include object classification and face attribute estimation, but do not exclude other tasks on DNNs. We have applied a total of two inversion attacks and four inference attacks to the complex-valued DNNs for testing the privacy performance." Datasets used: CIFAR-10, CIFAR-100, CelebA, CUB-200.
Dataset Splits | No | The paper mentions a 'training dataset' and 'validation' in general terms but does not give specific train/validation/test splits (e.g., percentages or exact sample counts).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., library or solver names and versions).
Experiment Setup | Yes | Batch sizes of 128 and 100 are used for CelebA and CIFAR-100, respectively. Complex-valued DNNs were constructed from 8 classical DNNs in total: the ResNet-20/32/44/56/110 (He et al., 2016), the LeNet (LeCun et al., 1998), the VGG-16 (Simonyan & Zisserman, 2015), and the AlexNet (Krizhevsky et al., 2012). The encoder of the LeNet consisted of the first convolutional layer and the GAN, whereas its decoder only contained the softmax layer. All layers before the last 56×56 feature map of VGG-16 comprised the encoder; the decoder consisted of fully-connected layers and the softmax layer. For the AlexNet, the output of the first three convolutional layers was fed into the GAN, and the decoder contained fully-connected layers and the softmax layer. In the processing modules of these DNNs, we set c_k = E_ij[x_ijk] for neural activations in the k-th channel.
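The channel statistic quoted in the Experiment Setup row, c_k = E_ij[x_ijk], reads as the expectation of activations over spatial positions (i, j) within channel k. A minimal NumPy sketch of that computation, under the assumption of a single HWC-layout feature map (the function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def channel_statistic(x):
    """Compute c_k = E_ij[x_ijk]: the mean activation over spatial
    positions (i, j) for each channel k.

    x: feature map of shape (H, W, C); HWC layout is an assumption.
    Returns an array of shape (C,), one statistic per channel.
    """
    return x.mean(axis=(0, 1))

# Example: a 4x4 feature map with 3 channels.
x = np.arange(48, dtype=float).reshape(4, 4, 3)
c = channel_statistic(x)  # -> [22.5, 23.5, 24.5]
```

Averaging over the two spatial axes at once leaves exactly one scalar per channel, matching the per-channel statistic c_k described in the paper.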