Optimal Strategies Against Generative Attacks

Authors: Roy Mor, Erez Peterfreund, Matan Gavish, Amir Globerson

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate the method empirically, showing that it outperforms existing methods in terms of resilience to generative attacks, and that it can be used effectively for data-augmentation in the few-shot learning setting."
Researcher Affiliation | Academia | Roy Mor (Tel Aviv University, Tel Aviv, Israel); Erez Peterfreund (The Hebrew University of Jerusalem, Jerusalem, Israel); Matan Gavish (The Hebrew University of Jerusalem, Jerusalem, Israel); Amir Globerson (Tel Aviv University, Tel Aviv, Israel)
Pseudocode | No | The paper describes mathematical proofs and theoretical models but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Our implementation is available at https://github.com/roymor1/OptimalStrategiesAgainstGenerativeAttacks."
Open Datasets | Yes | "We next evaluate GIM in an authentication setting on two datasets: the VoxCeleb2 faces dataset (Nagrani et al., 2017; Chung & Zisserman, 2018), and the Omniglot handwritten character dataset (Lake et al., 2015)."
Dataset Splits | Yes | "We used the original split of 5994 identities for training and 118 for test. For Omniglot, we use the splits and augmentations suggested by Vinyals et al. (2016) and used by Snell et al. (2017)." (A hedged dataset-loading sketch follows the table.)
Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU or CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer and refers to specific loss functions and architectural inspirations from other papers, but it does not provide version numbers for software dependencies or libraries (e.g., PyTorch, TensorFlow, or the Adam implementation used).
Experiment Setup | Yes | "Each experiment is trained for 200K iterations with a batch size of 4000 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 10^-4. The experiments on Omniglot were trained for 520k iterations with batch size 128 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 10^-6 for D, 10^-5 for G, and 10^-7 for MLPG... The experiments on Voxceleb2 were trained for 250k iterations with batch size 64 using the Adam optimizer (Kingma & Ba, 2015) with learning rate 10^-4 for both D and G and 10^-6 for MLPG. The regularization parameter was set to 10." (A configuration sketch follows the table.)
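
For orientation, here is a minimal sketch of loading Omniglot and applying the rotation augmentation of Vinyals et al. (2016), assuming a PyTorch/torchvision setup. This is an illustrative assumption, not the authors' code: torchvision ships the original background/evaluation partition of Lake et al. (2015) rather than the exact Vinyals et al. (2016) character split cited in the paper, and the VoxCeleb2 identity split (5994 train / 118 test) comes from the official dataset release.

```python
# Illustrative sketch only (not the authors' code): load Omniglot via
# torchvision and apply the 90-degree rotation augmentation of
# Vinyals et al. (2016). torchvision's background/evaluation partition is the
# original Lake et al. (2015) split, not the exact Vinyals character split.
import torchvision
import torchvision.transforms.functional as TF

# Download the "background" (training) portion of Omniglot.
train_set = torchvision.datasets.Omniglot(root="./data", background=True, download=True)

def rotated_views(img):
    """Return the image rotated by 0, 90, 180, and 270 degrees; each rotation
    is treated as a distinct character class in the Vinyals-style augmentation."""
    return [TF.rotate(img, angle) for angle in (0, 90, 180, 270)]

img, label = train_set[0]       # img is a PIL image, label a character class
views = rotated_views(img)      # four augmented views of the first sample
```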
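
The optimizer settings quoted in the Experiment Setup row can be summarized as a small configuration sketch, assuming a PyTorch implementation. D, G, and MLPG below are placeholder modules standing in for the paper's discriminator, generator, and MLP components; their architectures are not reproduced here.

```python
# Hedged sketch of the reported training hyperparameters (assumed PyTorch);
# D, G, and MLPG are placeholder modules, not the paper's actual architectures.
import torch
import torch.nn as nn

D, G, MLPG = nn.Linear(64, 1), nn.Linear(64, 64), nn.Linear(64, 64)  # stand-ins

# Omniglot: 520k iterations, batch size 128.
omniglot_opts = {
    "D":    torch.optim.Adam(D.parameters(),    lr=1e-6),
    "G":    torch.optim.Adam(G.parameters(),    lr=1e-5),
    "MLPG": torch.optim.Adam(MLPG.parameters(), lr=1e-7),
}

# VoxCeleb2: 250k iterations, batch size 64.
voxceleb_opts = {
    "D":    torch.optim.Adam(D.parameters(),    lr=1e-4),
    "G":    torch.optim.Adam(G.parameters(),    lr=1e-4),
    "MLPG": torch.optim.Adam(MLPG.parameters(), lr=1e-6),
}
reg_weight = 10.0  # regularization parameter reported for VoxCeleb2
```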