MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples
Authors: Jinyuan Jia, Wenjie Qu, Neil Gong
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we evaluate our MultiGuard on VOC 2007, MS-COCO, and NUS-WIDE benchmark datasets. |
| Researcher Affiliation | Academia | Jinyuan Jia University of Illinois Urbana-Champaign jinyuan@illinois.edu; Wenjie Qu Huazhong University of Science and Technology wen_jie_qu@outlook.com; Neil Zhenqiang Gong Duke University neil.gong@duke.edu |
| Pseudocode | Yes | Complete algorithm: Algorithm 1 in supplementary materials shows our complete algorithm to compute the certified intersection size for an input x. |
| Open Source Code | Yes | Our code is available at: https://github.com/quwenjie/MultiGuard |
| Open Datasets | Yes | VOC 2007 [15]: Pascal Visual Object Classes Challenge (VOC 2007) dataset [15]... MS-COCO [28]: Microsoft-COCO (MS-COCO) [28] dataset... NUS-WIDE [9]: NUS-WIDE dataset [9]... We adopt the version released by [2] |
| Dataset Splits | Yes | Following previous work [43], we split the dataset into 5,011 training images and 4,952 testing images. MS-COCO [28] dataset contains 82,081 training images, 40,504 validation images, and 40,775 testing images from 80 objects. NUS-WIDE [9]... contains 154,000 training images and 66,000 testing images. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' and 'ASL2' (with a GitHub link) but does not specify version numbers for any software dependencies or libraries. |
| Experiment Setup | Yes | Following [2], we set training hyperparameters γ+ = 0, γ− = 4, and m = 0.05. We train the classifier using the Adam optimizer with learning rate 10^-3 and batch size 32. |
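The hyperparameters in the Experiment Setup row (γ+ = 0, γ− = 4, m = 0.05) are the focusing and probability-margin parameters of the asymmetric loss (ASL) from reference [2], which the paper adopts for multi-label training. The sketch below is a minimal, illustrative NumPy rendering of that loss with those values as defaults; the function name and this formulation are assumptions, not the authors' released code.

```python
import numpy as np

def asymmetric_loss(logits, targets, gamma_pos=0.0, gamma_neg=4.0, margin=0.05):
    """Illustrative sketch of the ASL objective [2] with the paper's
    reported hyperparameters (gamma+ = 0, gamma- = 4, m = 0.05).
    logits, targets: arrays of shape (batch, num_labels); targets in {0, 1}.
    """
    p = 1.0 / (1.0 + np.exp(-logits))       # per-label sigmoid probability
    p_neg = np.clip(p - margin, 0.0, 1.0)   # probability shifting for negatives
    eps = 1e-8                              # numerical stability for log
    # Positive labels: standard focal-style term with exponent gamma_pos.
    loss_pos = targets * (1.0 - p) ** gamma_pos * np.log(p + eps)
    # Negative labels: shifted probability with harder down-weighting (gamma_neg).
    loss_neg = (1.0 - targets) * p_neg ** gamma_neg * np.log(1.0 - p_neg + eps)
    return float(-(loss_pos + loss_neg).mean())
```

In a training loop this loss would be minimized with Adam at learning rate 10^-3 and batch size 32, per the quoted setup; the asymmetric exponents let easy negatives (the vast majority in multi-label data) contribute almost nothing while positives keep full gradient weight.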