Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks
Authors: Alexander Levine, Soheil Feizi
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present empirical results evaluating the performance of the proposed methods, DPA and SS-DPA, against poisoning attacks on the MNIST, CIFAR-10, and GTSRB datasets. |
| Researcher Affiliation | Academia | Alexander Levine & Soheil Feizi, Department of Computer Science, University of Maryland, College Park, MD 20742, USA ({alevine0, sfeizi}@cs.umd.edu) |
| Pseudocode | No | The paper describes its algorithms conceptually and mathematically but does not include a clearly labeled pseudocode or algorithm block; a hedged Python sketch of the DPA partition-and-vote scheme is given after this table. |
| Open Source Code | Yes | Code is available at https://github.com/alevine0/DPA. |
| Open Datasets | Yes | We present empirical results evaluating the performance of the proposed methods, DPA and SS-DPA, against poisoning attacks on the MNIST, CIFAR-10, and GTSRB datasets. |
| Dataset Splits | No | The paper mentions using MNIST, CIFAR-10, and GTSRB datasets but does not provide explicit details on training, validation, and test splits (e.g., percentages, sample counts, or specific split files/methods for reproduction). |
| Hardware Specification | Yes | Training times are reported for a single GPU; note that many partitions can be trained in parallel. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2019)' as the implementation framework and specific methods such as 'RotNet (Gidaris et al., 2018)' and 'SimCLR (Chen et al., 2020)', but does not provide version numbers for these software components. |
| Experiment Setup | Yes | For training the embeddings, we used a ResNet18 model with a batch size of 512 for CIFAR-10 and 256 for GTSRB, an initial learning rate of 0.5, cosine annealing, and a temperature parameter of 0.5, trained for 1000 epochs. For learning the linear ensemble classifiers, we used a batch size of 512 and an initial learning rate of 1.0, and trained for 100 epochs. (A configuration sketch follows this table.) |
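
For reference, below is a minimal Python sketch of the DPA scheme as the paper describes it: the training set is split into disjoint partitions by a deterministic hash of each sample, one base classifier is trained per partition, and test inputs are classified by majority vote with ties broken toward the smaller class label. The `serialize` and `train_base_classifier` callables are illustrative placeholders, not the authors' implementation.

```python
import hashlib
from collections import Counter

def partition_index(sample_bytes: bytes, k: int) -> int:
    """Deterministically map a serialized sample to one of k partitions."""
    digest = hashlib.sha256(sample_bytes).digest()
    return int.from_bytes(digest[:8], "big") % k

def train_dpa(dataset, k, serialize, train_base_classifier):
    """Split the training set into k disjoint, hash-defined partitions
    and train one base classifier per partition."""
    partitions = [[] for _ in range(k)]
    for x, y in dataset:
        partitions[partition_index(serialize(x), k)].append((x, y))
    return [train_base_classifier(part) for part in partitions]

def dpa_predict(classifiers, x):
    """Classify x by majority vote over the base classifiers, breaking
    ties deterministically in favor of the smaller class label."""
    votes = Counter(clf(x) for clf in classifiers)
    best = max(votes.values())
    return min(label for label, count in votes.items() if count == best)
```

Because each partition is a deterministic function of the sample itself, a poisoned training point can affect only the one base classifier trained on its partition, which is what makes the majority vote certifiable.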
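The quoted embedding-training configuration can be expressed as a PyTorch setup along the following lines. The optimizer choice (SGD) and the 128-dimensional output are assumptions for illustration; the quoted passage specifies only the architecture, batch sizes, learning rate, schedule, temperature, and epoch counts.

```python
import torch
import torchvision

# Hyperparameters quoted in the paper's experiment setup.
EPOCHS = 1000       # embedding-training epochs
BATCH_SIZE = 512    # CIFAR-10; the paper uses 256 for GTSRB
TEMPERATURE = 0.5   # SimCLR-style contrastive temperature
INITIAL_LR = 0.5

# ResNet18 backbone; the 128-d output dimension is an assumption.
model = torchvision.models.resnet18(num_classes=128)

# SGD is an assumed optimizer choice; the quoted passage does not name one.
optimizer = torch.optim.SGD(model.parameters(), lr=INITIAL_LR)

# Cosine annealing over the full training run, as quoted.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)
```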