Universal Multi-Party Poisoning Attacks

Authors: Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We prove that for any bad property B of the final trained hypothesis h (e.g., h failing on a particular test example or having large risk) that has an arbitrarily small constant probability of happening without the attack, there always is a (k, p)-poisoning attack that increases the probability of B from µ to µ^(1 - p·k/m) = µ + Ω(p·k/m). (A numerical illustration of this bound follows the table.)
Researcher Affiliation | Academia | 1 University of Virginia. Supported by NSF CAREER award CCF-1350939, and University of Virginia's SEAS Research Innovation Award. {saeed, mohammad}@virginia.edu. 2 University of Kuwait. ameer.mohammed@ku.edu.kw.
Pseudocode | No | The paper describes the steps of the 'Rejection sampling attack' and 'Construction 3.7 (Rejection-sampling tampering)' as numbered lists, but these are not explicitly labeled as 'Pseudocode' or 'Algorithm' blocks. (A minimal illustrative sketch of the rejection-sampling idea follows the table.)
Open Source Code | No | The paper does not provide any statement about releasing source code for the methodology described, nor does it include any links to code repositories.
Open Datasets | No | The paper is theoretical and does not conduct experiments on specific datasets. Therefore, it does not mention specific publicly available datasets or provide access information for any dataset used for training.
Dataset Splits | No | The paper is theoretical and does not conduct experiments. Therefore, it does not provide details on validation dataset splits or cross-validation setups.
Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware. Therefore, no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe any experimental implementations. Therefore, no specific software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and focuses on proving the power of attacks rather than empirical experimentation. Therefore, it does not provide details on experimental setup such as hyperparameters or training configurations.
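
As a quick numerical illustration of the bound quoted in the Research Type row, the Python sketch below (not from the paper; the values of µ, p, k, and m are chosen arbitrarily for illustration) evaluates µ^(1 - p·k/m) for a few settings to show how much a (k, p)-poisoning attack can amplify an initially small probability µ of the bad property B.

```python
# Hypothetical illustration (not from the paper): plugging example values into
# the quoted bound mu^(1 - p*k/m) = mu + Omega(p*k/m) to show how a
# (k, p)-poisoning attack amplifies the probability of a bad property B.

def amplified_probability(mu: float, p: float, k: int, m: int) -> float:
    """Probability of B under a (k, p)-poisoning attack, per the bound mu^(1 - p*k/m)."""
    return mu ** (1.0 - p * k / m)

# Example: B happens with probability 1% without any attack (mu = 0.01).
mu = 0.01
for k, m, p in [(1, 10, 1.0), (5, 10, 1.0), (5, 10, 0.5)]:
    after = amplified_probability(mu, p, k, m)
    print(f"k={k}, m={m}, p={p}: {mu:.3f} -> {after:.3f}")
# k=1, m=10, p=1.0: 0.010 -> 0.016
# k=5, m=10, p=1.0: 0.010 -> 0.100
# k=5, m=10, p=0.5: 0.010 -> 0.032
```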
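
The Pseudocode row notes that the paper presents its 'Rejection sampling attack' only as numbered prose steps. The following is a minimal, hypothetical Python sketch of the general rejection-sampling biasing idea, not a transcription of the paper's Construction 3.7; the function names, the acceptance rule, and the toy usage are assumptions made for illustration.

```python
import random

# Hypothetical sketch (NOT the paper's exact Construction 3.7): a corrupted party
# replaces a block it controls by repeatedly drawing fresh honest samples and
# keeping one with probability given by an estimate of how likely the bad
# property B becomes if that sample is submitted. This biases the stream toward
# B while every submitted block still comes from the honest distribution.

def rejection_sample_block(sample_honest_block, estimate_bad_prob, max_tries=100):
    """Return a tampered block biased toward the bad property.

    sample_honest_block: () -> block, draws from the honest data distribution.
    estimate_bad_prob: block -> float in [0, 1], estimated probability that the
        bad property B holds if this block is submitted (e.g., via simulation).
    """
    for _ in range(max_tries):
        candidate = sample_honest_block()
        if random.random() < estimate_bad_prob(candidate):
            return candidate          # accept: this candidate favors B
    return sample_honest_block()      # give up and behave honestly

# Toy usage with hypothetical stand-ins: blocks are coin flips and the "bad
# property" is simply that the submitted block equals 1.
biased = [rejection_sample_block(lambda: random.randint(0, 1),
                                 lambda b: 0.9 if b == 1 else 0.1)
          for _ in range(1000)]
print(sum(biased) / len(biased))      # noticeably above the honest mean of 0.5
```

Note that the sketch only ever submits samples drawn from the honest distribution; the bias comes purely from which draws are kept, which is in the spirit of the clean-label, online attacks the abstract describes.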