CoPur: Certifiably Robust Collaborative Inference via Feature Purification

Authors: Jing Liu, Chulin Xie, Sanmi Koyejo, Bo Li

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on the ExtraSensory and NUS-WIDE datasets show that CoPur significantly outperforms existing defenses in robustness against targeted and untargeted adversarial attacks.
Researcher Affiliation | Collaboration | Jing Liu, Department of Computer Science, University of Illinois at Urbana-Champaign (jil292@illinois.edu); Chulin Xie, Department of Computer Science, University of Illinois at Urbana-Champaign (chulinx2@illinois.edu); Oluwasanmi O. Koyejo, University of Illinois at Urbana-Champaign & Stanford University & Google Research (sanmi@stanford.edu); Bo Li, Department of Computer Science, University of Illinois at Urbana-Champaign (lbo@illinois.edu)
Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block for its own method.
Open Source Code | Yes | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Please see Section C.1 of supplemental material."
Open Datasets | Yes | "ExtraSensory contains the measurements from diverse sensors of smartphone and smartwatch." ExtraSensory dataset: http://extrasensory.ucsd.edu (license: CC BY-NC-SA 4.0). "In NUS-WIDE, each sample has 634 image features, 1000 text features, and 5 different labels." NUS-WIDE dataset: https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html.
Dataset Splits | No | The paper specifies training and testing splits, but the provided text mentions no explicit validation split for either dataset. For ExtraSensory: "We use the first 1721 samples from a user for training, and the rest 465 samples for testing." For NUS-WIDE: "We use 60000 samples for training, 1000 samples for testing targeted attacks, and 10000 samples for testing untargeted attacks."
Hardware Specification | No | The paper does not describe the specific hardware (e.g., GPU models, CPU types, or cloud instance specifications) used to run its experiments.
Software Dependencies | No | The paper does not provide version numbers for the software dependencies or libraries used in the experiments.
Experiment Setup | Yes | For ExtraSensory: "We set the PGD attack with a learning rate of 0.5 and 30 iterations so that it can successfully attack the unsecured model." For NUS-WIDE: "the target label is set to be grass. We set the PGD attack with a learning rate of 0.1 and 50 iterations."
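The PGD settings quoted above (learning rate and iteration count) can be sketched generically. This is a minimal NumPy sketch of projected gradient descent, not the paper's implementation: the loss, model, and L2 perturbation budget `eps` are assumptions chosen for illustration, and a real attack would backpropagate through the trained network.

```python
import numpy as np

def pgd_attack(grad_fn, x0, lr=0.5, n_iters=30, eps=1.0):
    """Generic PGD sketch: ascend the loss via its gradient, then project
    the accumulated perturbation back onto an L2 ball of radius eps."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x + lr * grad_fn(x)          # gradient ascent step on the loss
        delta = x - x0
        norm = np.linalg.norm(delta)
        if norm > eps:                    # projection step
            x = x0 + delta * (eps / norm)
    return x

# Toy stand-in for a model's loss gradient: a fixed linear direction w.
w = np.array([1.0, -2.0, 0.5])
x0 = np.zeros(3)
x_adv = pgd_attack(lambda x: w, x0, lr=0.5, n_iters=30, eps=1.0)
# The perturbation saturates the eps-ball in the direction of w.
```

With the constant gradient used here, the iterate is repeatedly pushed along `w` and projected, so the final perturbation has norm exactly `eps`; the per-dataset settings in the paper correspond to `lr=0.5, n_iters=30` (ExtraSensory) and `lr=0.1, n_iters=50` (NUS-WIDE).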