Adversary for Social Good: Protecting Familial Privacy through Joint Adversarial Attacks

Authors: Chetan Kumar, Riazat Ryan, Ming Shao

AAAI 2020, pp. 11304-11311

Reproducibility assessment. Each entry below lists the variable, the result, and the LLM response cited as evidence:
Research Type: Experimental. Evidence: "Extensive experiments on a popular visual social dataset have demonstrated that our defense strategy can significantly mitigate the impacts of family information leakage."
Researcher Affiliation: Academia. Evidence: "Chetan Kumar, Riazat Ryan, Ming Shao. Department of Computer & Information Science, University of Massachusetts Dartmouth, Dartmouth, MA, USA. {ckumar, rryan2, mshao}@umassd.edu"
Pseudocode: Yes. Evidence: "Algorithm 1: Procedure of Joint Adversarial Attack." (A hedged sketch of one such joint attack appears after this list.)
Open Source Code: No. The paper mentions using the "SphereNet and GCN open implementation by (Kipf and Welling 2016; Liu et al. 2017)" but does not state that the authors' own code for the described methodology is open source, nor does it provide a link.
Open Datasets: Yes. Evidence: "In this study we have used Families In the Wild (FIW) dataset (Robinson et al. 2018)."
Dataset Splits: Yes. Evidence: "Among 2758 nodes, we have used 502 nodes for training with graph, while the rest for validation and testing." (A sketch of one way to reproduce this split appears after this list.)
Hardware Specification: Yes. Evidence: "All the codes are implemented on Ubuntu 16.04 system with i7-8700 (3.2 GHz), 16 GB memory and an NVIDIA GTX 1070 GPU card."
Software Dependencies: No. The paper mentions the "PyTorch library, SphereNet and GCN open implementation" but does not provide version numbers for any of these software components.
Experiment Setup: Yes. Evidence: "First, we have preprocessed the FIW dataset by extracting the features of the images by using pre-trained SphereNet model, and the dimension of node features is thus reduced to 512. ϵ = Δϵ = 0.00025, e = Δe = 0.05|E| from Algorithm 1." (A configuration sketch of these settings closes this section.)
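
The first sketch below is a minimal, hypothetical rendering of the joint attack named in Algorithm 1, assuming an iterative signed-gradient step on node features plus a greedy gradient-based edge flip on the adjacency matrix. The step size and edge budget come from the report; the model signature `model(X, A)`, the loop structure, and all function names are illustrative, not the authors' code.

```python
# Hypothetical sketch only: signed-gradient feature attack combined with a
# greedy edge flip. eps and the 0.05*|E| budget are quoted from the paper;
# everything else (model signature, loop structure) is assumed.
import torch
import torch.nn.functional as F

def joint_adversarial_attack(model, X, A, y, eps=0.00025, edge_frac=0.05, steps=10):
    X_adv = X.clone().detach().requires_grad_(True)
    A_adv = A.clone().detach().requires_grad_(True)

    # Feature attack: repeated FGSM-style steps of size eps.
    for _ in range(steps):
        loss = F.cross_entropy(model(X_adv, A_adv), y)
        grad_X = torch.autograd.grad(loss, X_adv)[0]
        X_adv = (X_adv + eps * grad_X.sign()).detach().requires_grad_(True)

    # Edge attack: flip the 0.05*|E| entries whose gradient most increases
    # the loss (undirected, zero-diagonal adjacency assumed).
    loss = F.cross_entropy(model(X_adv, A_adv), y)
    grad_A = torch.autograd.grad(loss, A_adv)[0]
    budget = max(1, int(edge_frac * A.sum().item() / 2))
    score = torch.where(A > 0, -grad_A, grad_A)   # removal vs. addition gain
    score = torch.triu(score, diagonal=1)         # count each node pair once
    flat_idx = torch.topk(score.flatten(), budget).indices
    rows = torch.div(flat_idx, A.size(1), rounding_mode="floor")
    cols = flat_idx % A.size(1)
    A_new = A.clone()
    A_new[rows, cols] = 1.0 - A_new[rows, cols]
    A_new[cols, rows] = A_new[rows, cols]         # keep the graph symmetric
    return X_adv.detach(), A_new
```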
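
Next, a small sketch of reproducing the reported node split (502 of 2758 nodes for training). The paper does not state the validation/test ratio, so the even division of the remainder below is an assumption.

```python
# Assumed split logic: 502 training nodes out of 2758; the 50/50 division
# of the remaining nodes into validation and test is an assumption.
import torch

num_nodes = 2758
perm = torch.randperm(num_nodes)

train_idx = perm[:502]
rest = perm[502:]
val_idx, test_idx = rest[:len(rest) // 2], rest[len(rest) // 2:]

# Boolean masks of the kind commonly used with GCN implementations.
train_mask = torch.zeros(num_nodes, dtype=torch.bool)
train_mask[train_idx] = True
```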
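
Finally, a hedged configuration sketch of the reported setup: 512-dimensional SphereNet features per face image, and the budgets ϵ = Δϵ = 0.00025 and e = Δe = 0.05|E| from Algorithm 1. The `spherenet` callable is a placeholder for the pre-trained model, not the authors' released pipeline.

```python
# Configuration sketch. The spherenet argument stands in for a pre-trained
# SphereNet feature extractor (assumed to emit 512-dim embeddings); the
# numeric budgets are quoted directly from the paper's setup.
import torch

def extract_node_features(images, spherenet):
    """Map face images to the 512-dim node features used to build the graph."""
    with torch.no_grad():
        return spherenet(images)  # expected shape: (num_nodes, 512)

def attack_budgets(num_edges):
    eps = delta_eps = 0.00025            # feature step: eps = Δeps
    e = delta_e = int(0.05 * num_edges)  # edge budget: e = Δe = 0.05|E|
    return eps, delta_eps, e, delta_e
```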