Poisoning Generative Replay in Continual Learning to Promote Forgetting

Authors: Siteng Kang, Zhan Shi, Xinhua Zhang

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We next experiment on CIAP to verify: i) it attains the two objectives Oeff and Oste; ii) the trigger-discarding property introduced in 3.3 holds for commonly used generative models; iii) CIAP remains effective under strong defenders (Orob). The code is available in the Online Supplementary. split-MNIST: We separated the MNIST dataset into five tasks, each consisting of images from two disjoint classes: the first task includes classes 0 and 1; the second includes 2 and 3; and so on. The victim model was trained for 100 epochs on each task. Figure 1a shows the baseline result without a replayer, where the blue line represents the test accuracy of the first task, the orange line the second, etc. Figure 1c shows the result after our attack CIAP is enacted. The test accuracy of each current task can still reach nearly 100%, corroborating the achievement of objective Oste.
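The split-MNIST protocol quoted above (five tasks of two disjoint classes each) can be sketched as follows; this is an illustrative reconstruction, not the authors' code, and the deterministic label array stands in for the real MNIST labels:

```python
import numpy as np

def make_split_tasks(labels, classes_per_task=2):
    """Partition a label array into disjoint tasks:
    task 0 -> classes {0, 1}, task 1 -> {2, 3}, and so on."""
    classes = np.unique(labels)
    tasks = []
    for start in range(0, len(classes), classes_per_task):
        task_classes = classes[start:start + classes_per_task]
        idx = np.flatnonzero(np.isin(labels, task_classes))
        tasks.append((task_classes.tolist(), idx))
    return tasks

# Stand-in for MNIST labels: 100 samples per digit 0-9.
labels = np.repeat(np.arange(10), 100)
tasks = make_split_tasks(labels)
print(len(tasks))                # 5
print(tasks[0][0], tasks[1][0])  # [0, 1] [2, 3]
```

In the actual experiment the victim model would then be trained for 100 epochs on each task's index subset in sequence, with the replayer regenerating samples from earlier tasks.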
Researcher Affiliation | Academia | Siteng Kang 1, Zhan Shi 1, Xinhua Zhang 1. 1 Department of Computer Science, University of Illinois Chicago. Correspondence to: Siteng Kang and Xinhua Zhang <{skang98,zhangx}@uic.edu>.
Pseudocode | Yes | Algorithm 1 DGR to combat forgetting (continual learning) [...] Algorithm 2 Input Aware Backdoor [...] Algorithm 3 Input Aware Backdoor Obj [...] Algorithm 4 Operation of the user, attacker, and defender during task t
Open Source Code | Yes | The code is available in the Online Supplementary: supplementary material including code (no tracking), https://www.dropbox.com/sh/mku8oln1t7ngscl/AABVPSwZBlx41GtQYRyYVRgha?dl=0.
Open Datasets | Yes | We tested CIAP on five datasets: split-MNIST (Ciresan et al., 2011), split-CIFAR-10 (Krizhevsky & Hinton, 2009), FashionMNIST-MNIST (Xiao et al., 2017), permuted MNIST (Goodfellow et al., 2014), and split-EMNIST (Cohen et al., 2017).
Dataset Splits | No | The paper describes training models and evaluating on test sets, but does not explicitly provide details about a separate validation split (e.g., percentages or counts of validation data) used during training or hyperparameter tuning.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, memory).
Software Dependencies | No | The paper mentions using "cWGAN with gradient penalty as the replayer" and "cVAE", "Spinal VGG", and "ResNet" as classifiers, but does not provide version numbers for these software components or for any other libraries/frameworks used.
Experiment Setup | Yes | The victim model was trained for 100 epochs on each task. The poison ratio ρb = 0.25, and the cross ratio ρc = 0.15. Each trigger was allowed to change 5% of the image's pixels.
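The quoted hyperparameters (ρb = 0.25, ρc = 0.15, 5% pixel budget) can be made concrete with a minimal batch-composition sketch. This is an assumption-laden illustration: the random pixel mask below is a placeholder for the paper's learned input-aware trigger, and `compose_poisoned_batch` is a hypothetical helper, not the authors' implementation:

```python
import numpy as np

def compose_poisoned_batch(images, rho_b=0.25, rho_c=0.15,
                           pixel_budget=0.05, seed=0):
    """Split a batch into backdoor / cross-trigger / clean subsets and
    perturb at most `pixel_budget` of each poisoned image's pixels.
    A random sparse mask stands in for the learned input-aware trigger."""
    rng = rng_local = np.random.default_rng(seed)
    n, h, w = images.shape
    n_bd = int(rho_b * n)                 # backdoored samples
    n_cross = int(rho_c * n)              # cross-trigger samples
    k = max(1, int(pixel_budget * h * w)) # pixels a trigger may change
    out = images.astype(float).copy()
    for i in range(n_bd + n_cross):
        flat = rng_local.choice(h * w, size=k, replace=False)
        r, c = np.unravel_index(flat, (h, w))
        out[i, r, c] = 1.0                # set trigger pixels to max intensity
    return out, n_bd, n_cross

batch = np.zeros((100, 28, 28))
poisoned, n_bd, n_cross = compose_poisoned_batch(batch)
print(n_bd, n_cross)  # 25 15
changed = (poisoned != batch).reshape(100, -1).sum(axis=1)
print(changed[0], changed[-1])  # 39 0
```

For 28x28 MNIST images the 5% budget works out to 39 pixels per trigger; the remaining 60% of the batch is left clean, matching the stated ratios.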