Alleviating Privacy Attacks via Causal Learning

Authors: Shruti Tople, Amit Sharma, Aditya Nori

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate on two types of datasets: 1) Four datasets generated from known Bayesian Networks and 2) Colored images of digits from the MNIST dataset. Code is available at https://github.com/microsoft/robustdg. ... 4.1 Results for Bayesian Network Datasets ... 4.2 Results for Colored MNIST Dataset
Researcher Affiliation | Industry | Microsoft Research. Correspondence to: Shruti Tople <shruti.tople@microsoft.com>, Amit Sharma <amshar@microsoft.com>.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/microsoft/robustdg.
Open Datasets | Yes | We select 4 Bayesian network datasets, Child, Sachs, Alarm and Water, that range from 178 to 10k parameters (Table 1). ... www.bnlearn.com/bnrepository ... For this, we consider colored MNIST images used in a recent work by Arjovsky et al. (2019). ... http://yann.lecun.com/exdb/mnist/ ... https://github.com/facebookresearch/InvariantRiskMinimization
Dataset Splits | No | The paper states: 'We sample data using the causal structure and probabilities from the Bayesian network, and use a 60:40% split for train-test datasets.' However, it does not explicitly provide information about a separate validation split or its size/percentage for either dataset. (A sketch of such a split follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | Yes | To train the causal model, we use the bnlearn library in R language... To train the DNN model and the attacker model, we build custom estimators in Python using Tensorflow v1.2.
Experiment Setup | Yes | The DNN model is a multilayer perceptron (MLP) with 3 hidden layers of 128, 512 and 128 nodes respectively. The learning rate is set to 0.0001 and the model is trained for 10000 steps. The attacker model has 2 hidden layers with 5 nodes each, a learning rate of 0.001, and is trained for 5000 steps. Both models use Adam optimizer, ReLU for the activation function, and cross-entropy as the loss function. (A sketch of this configuration follows the table.)
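
For reference, here is a minimal sketch of the 60:40 train-test split described under Dataset Splits. The use of scikit-learn, the placeholder arrays, and the random seed are illustrative assumptions, not the paper's code (the paper samples data from bnlearn networks in R).

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for samples drawn from a Bayesian network
# (shapes and values are assumptions for illustration only).
X = np.random.randint(0, 3, size=(1000, 20))
y = np.random.randint(0, 2, size=1000)

# 60:40 train-test split as stated in the paper; no validation split is described.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
```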
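
Likewise, a minimal TensorFlow 1.x sketch of the reported Experiment Setup for the target DNN (3 hidden layers of 128, 512 and 128 ReLU units, Adam at learning rate 0.0001, cross-entropy loss). The feature dimension, number of classes, and placeholder names are assumptions; the paper's custom estimators are not reproduced here.

```python
import tensorflow as tf  # TensorFlow 1.x API, as reported in the paper

NUM_FEATURES = 20  # assumption: depends on the Bayesian network dataset
NUM_CLASSES = 2    # assumption: depends on the chosen target variable

x = tf.placeholder(tf.float32, [None, NUM_FEATURES])
y = tf.placeholder(tf.int64, [None])

# Target DNN: 3 hidden layers of 128, 512 and 128 ReLU units.
h = tf.layers.dense(x, 128, activation=tf.nn.relu)
h = tf.layers.dense(h, 512, activation=tf.nn.relu)
h = tf.layers.dense(h, 128, activation=tf.nn.relu)
logits = tf.layers.dense(h, NUM_CLASSES)

# Cross-entropy loss and Adam with the reported learning rate of 0.0001.
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)

# The attacker model follows the same pattern with 2 hidden layers of 5 units
# each and Adam at learning rate 0.001, trained for 5000 steps.
```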