Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks
Authors: Shijie Liu, Andrew C. Cullen, Paul Montague, Sarah M. Erfani, Benjamin I. P. Rubinstein
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our defence method certifies robustness against more than double the number of poisoned examples handled by existing certified approaches, as demonstrated by experiments on MNIST, Fashion-MNIST, and CIFAR-10 across varying levels of added noise σ. |
| Researcher Affiliation | Collaboration | 1School of Computing and Information Systems, University of Melbourne, Melbourne, Australia 2Defence Science and Technology Group, Adelaide, Australia |
| Pseudocode | Yes | Algorithm 1: Certifiably Robust Differentially Private Defence Algorithm. (See the training-step sketch after the table.) |
| Open Source Code | No | The paper does not contain an explicit statement about releasing the source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | To verify the effectiveness of our proposed pointwise-certified defence, we conducted experiments across MNIST, Fashion-MNIST, and CIFAR-10 for varying levels of added noise σ. (See the dataset-loading sketch after the table.) |
| Dataset Splits | No | The paper specifies training datasets (MNIST, Fashion-MNIST, CIFAR-10) and mentions a 'testing dataset De' but does not explicitly detail the training/test/validation split percentages or methodology for a validation set. |
| Hardware Specification | Yes | All experiments were conducted in Pytorch using a single NVIDIA RTX 2080 Ti GPU with 11 GB of GPU RAM. |
| Software Dependencies | No | The paper mentions 'Pytorch' but does not specify a version number or list other software dependencies with their versions. |
| Experiment Setup | Yes | Across all experiments, the sample ratio q is adjusted to give a batch size of 128, with training conducted using Adam at a learning rate of 0.01, optimising the cross-entropy loss. The clip size C is fine-tuned for each experiment (around 1.0 on MNIST, 25.0 on CIFAR-10). In each case, uncertainties were estimated for a confidence interval suitable for η = 0.001. (See the confidence-bound sketch after the table.) |
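
The hyperparameters quoted in the Pseudocode and Experiment Setup rows (per-example clipping to norm C, added Gaussian noise σ, Adam at lr 0.01, cross-entropy loss) follow the familiar DP-SGD pattern. The paper's Algorithm 1 is not reproduced in this review, so the following is a minimal sketch under that assumption; the function and variable names are illustrative, not the authors'.

```python
# Hedged sketch of a DP-SGD-style update consistent with the reported
# hyperparameters (clip size C, noise level sigma, Adam, lr 0.01,
# cross-entropy loss). This is NOT the paper's Algorithm 1 verbatim.
import torch
import torch.nn as nn

def noisy_clipped_step(model, optimiser, batch_x, batch_y, clip_C=1.0, sigma=1.0):
    """One update: clip each per-example gradient to norm clip_C, sum,
    perturb with Gaussian noise scaled by sigma * clip_C, then average."""
    loss_fn = nn.CrossEntropyLoss()
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-example gradients via microbatches of size 1 (clear, not fast).
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_C / (norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    # Add calibrated Gaussian noise, then average over the batch.
    for p, s in zip(params, summed):
        p.grad = (s + torch.randn_like(s) * sigma * clip_C) / len(batch_x)
    optimiser.step()
    optimiser.zero_grad()
```

Pairing this step with `torch.optim.Adam(model.parameters(), lr=0.01)` matches the optimiser and learning rate reported in the Experiment Setup row.

The three benchmarks named in the Open Datasets row are all available through torchvision; the paper gives no loading code, so this is a minimal sketch under standard assumptions (the root path, transforms, and fixed batch size are illustrative).

```python
# Minimal sketch: obtaining MNIST, Fashion-MNIST, and CIFAR-10 via
# torchvision. Paths and transforms are illustrative assumptions.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
train_sets = {
    "MNIST": datasets.MNIST("data", train=True, download=True, transform=to_tensor),
    "Fashion-MNIST": datasets.FashionMNIST("data", train=True, download=True, transform=to_tensor),
    "CIFAR-10": datasets.CIFAR10("data", train=True, download=True, transform=to_tensor),
}

# The paper reaches an (expected) batch size of 128 through the sample
# ratio q; a fixed-size DataLoader is used here only for illustration.
loaders = {name: DataLoader(ds, batch_size=128, shuffle=True)
           for name, ds in train_sets.items()}
```

The Experiment Setup row mentions uncertainties estimated "for a confidence interval suitable for η = 0.001". In randomised-smoothing-style certifications this is typically a one-sided Clopper-Pearson bound on the top-class vote proportion; the paper's exact estimator is not stated, so the estimator choice and the counts below are assumed for illustration.

```python
# Hedged sketch: a Clopper-Pearson lower confidence bound at level
# eta = 0.001 on the smoothed classifier's top-class probability.
# The one-sided bound uses the two-sided interval at alpha = 2 * eta.
from statsmodels.stats.proportion import proportion_confint

eta = 0.001
top_class_votes, total_samples = 970, 1000  # illustrative Monte Carlo counts
lower_bound, _ = proportion_confint(top_class_votes, total_samples,
                                    alpha=2 * eta, method="beta")
print(f"Lower {1 - eta:.3%} confidence bound: {lower_bound:.4f}")
```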
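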
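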