Adversarial Learning of Privacy-Preserving and Task-Oriented Representations
Authors: Taihong Xiao, Yi-Hsuan Tsai, Kihyuk Sohn, Manmohan Chandraker, Ming-Hsuan Yang | Pages: 12434-12441
AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the proposed method on face attribute prediction, showing that our method allows protecting visual privacy with a small decrease in utility performance. In addition, we show the utility-privacy trade-off with different choices of hyperparameter for negative perceptual distance loss at training, allowing service providers to determine the right level of privacy-protection with a certain utility performance. Moreover, we provide an extensive study with different selections of features, tasks, and the data to further analyze their influence on privacy protection. |
| Researcher Affiliation | Collaboration | Taihong Xiao,¹ Yi-Hsuan Tsai,² Kihyuk Sohn,² Manmohan Chandraker,²,³ Ming-Hsuan Yang¹ (¹University of California, Merced; ²NEC Laboratories America; ³University of California, San Diego) |
| Pseudocode | No | The paper includes a conceptual diagram (Figure 2) but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We use the widely-used CelebA (Liu et al. 2015) and MSCeleb-1M (Guo et al. 2016) datasets for experiments. |
| Dataset Splits | Yes | In most experiments, we split the CelebA dataset into three parts, X1 with 160k images, X2 with 40k images, and the test set T with the rest. (See the split sketch below the table.) |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., specific GPU or CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using a 'ResNet-50 model' but does not provide specific version numbers for software libraries, frameworks, or environments (e.g., PyTorch version, TensorFlow version, Python version). |
| Experiment Setup | Yes | Table 2: Results with different λ2 in the training stage. Other hyperparameters are fixed: λ1 = 1, μ1 = 0, μ2 = 1. We use the ResNet-50 model (He et al. 2016) for feature representation and two fully connected layers for latent classifier f. The Dec uses up-sampling layers to decode features to pixel images. (See the architecture sketch below the table.) |
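
The dataset split reported above (X1 with 160k images, X2 with 40k images, test set T with the rest) could be reconstructed roughly as follows. This is a minimal sketch, not the authors' code: CelebA's total of 202,599 images is its standard size, but the random shuffling, seed, and exact index assignment are assumptions, since the paper does not release a split file.

```python
import torch

# Hypothetical reconstruction of the described CelebA split:
# X1 = 160k images, X2 = 40k images, test set T = the remainder.
NUM_IMAGES = 202_599                              # standard CelebA size
g = torch.Generator().manual_seed(0)              # fixed seed for a repeatable split (assumption)
perm = torch.randperm(NUM_IMAGES, generator=g)

x1_idx   = perm[:160_000]                         # X1: 160k images
x2_idx   = perm[160_000:200_000]                  # X2: 40k images
test_idx = perm[200_000:]                         # T: remaining ~2,599 images
```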
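
Based on the experiment-setup description above, a minimal PyTorch sketch of the model is given below. It is not the authors' implementation: the layer widths, the 224×224 input resolution, the 40 CelebA attributes, and the decoder channel schedule are assumptions; only the ResNet-50 backbone, the two-fully-connected-layer latent classifier f, and the up-sampling decoder Dec come from the paper's description.

```python
import torch
import torch.nn as nn
from torchvision import models


class PrivacyTaskModel(nn.Module):
    """Sketch of the described setup: ResNet-50 feature extractor,
    two-layer fully connected latent classifier f, and a decoder Dec
    that up-samples features back to pixel images."""

    def __init__(self, num_attributes: int = 40):
        super().__init__()
        resnet = models.resnet50(weights=None)
        # Encoder: ResNet-50 backbone up to its last convolutional block
        # (outputs a 2048 x 7 x 7 feature map for 224 x 224 inputs).
        self.encoder = nn.Sequential(*list(resnet.children())[:-2])
        # Latent classifier f: two fully connected layers on pooled features.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Sequential(
            nn.Linear(2048, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, num_attributes),
        )
        # Decoder Dec: stride-2 transposed convolutions decoding the feature
        # map back to a 3-channel image (7 -> 224 over five up-sampling steps).
        channels = [2048, 512, 256, 128, 64]
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh()]
        self.decoder = nn.Sequential(*layers)

    def forward(self, x):
        feat = self.encoder(x)                                   # task-oriented representation
        logits = self.classifier(self.pool(feat).flatten(1))     # attribute predictions
        recon = self.decoder(feat)                               # reconstructed image
        return feat, logits, recon


if __name__ == "__main__":
    model = PrivacyTaskModel()
    feat, logits, recon = model(torch.randn(2, 3, 224, 224))
    print(feat.shape, logits.shape, recon.shape)  # (2, 2048, 7, 7), (2, 40), (2, 3, 224, 224)
```

The utility loss (weighted by λ1) would be applied to the classifier output and the adversarial reconstruction/perceptual terms (weighted by λ2, μ1, μ2) to the decoder output, following the hyperparameter roles quoted in the table; the exact loss wiring is not reproduced here.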