Adaptive Verifiable Training Using Pairwise Class Similarity

Authors: Shiqi Wang, Kevin Eykholt, Taesung Lee, Jiyong Jang, Ian Molloy (pp. 10201-10209)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments on Fashion-MNIST and CIFAR10 demonstrate that by prioritizing the robustness between the most dissimilar groups, we improve clean performance by up to 9.63% and 30.89%, respectively. Furthermore, on CIFAR100, our approach reduces the clean error rate by 26.32%."
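The "26.32%" figure above is a relative reduction in the clean error rate, not an absolute percentage-point drop. A minimal sketch of the arithmetic, using hypothetical baseline and improved error rates (the paper's actual underlying rates are not quoted in this row):

```python
def relative_reduction(baseline_error: float, new_error: float) -> float:
    """Relative reduction in error rate: (baseline - new) / baseline."""
    return (baseline_error - new_error) / baseline_error

# Hypothetical illustration: dropping from 38% to 28% clean error
# is a ~26.3% relative reduction -- the same kind of figure quoted
# for CIFAR100 above. The 0.38 and 0.28 values are made up for the
# example and are not taken from the paper.
print(round(relative_reduction(0.38, 0.28), 3))  # -> 0.263
```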
Researcher Affiliation | Collaboration | Shiqi Wang (1), Kevin Eykholt (2), Taesung Lee (2), Jiyong Jang (2), Ian Molloy (2); 1: Columbia University, 2: IBM Research
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology.
Open Datasets | Yes | "We evaluated our approach using the Fashion-MNIST (F-MNIST), CIFAR10, and CIFAR100 datasets. ... ImageNet (Krizhevsky, Sutskever, and Hinton 2012)"
Dataset Splits | No | The paper mentions using a "test set" but does not provide specific split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) for training, validation, or test sets.
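For context on the row above: the three benchmarks named in the Open Datasets row ship with fixed train/test partitions, which is what papers usually assume when they cite the datasets without further detail. A small sketch recording those standard sizes (this does not confirm which splits the paper itself used, only what the dataset maintainers distribute):

```python
# Standard train/test sizes for the benchmarks named in the
# Open Datasets row, as published by their maintainers.
STANDARD_SPLITS = {
    "Fashion-MNIST": {"train": 60_000, "test": 10_000},
    "CIFAR10": {"train": 50_000, "test": 10_000},
    "CIFAR100": {"train": 50_000, "test": 10_000},
}

for name, split in STANDARD_SPLITS.items():
    total = split["train"] + split["test"]
    print(f"{name}: {split['train']} train / {split['test']} test ({total} total)")
```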
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor speeds, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software such as CROWN-IBP and auto_LiRPA but does not provide version numbers for these or any other ancillary software components.
Experiment Setup | No | The paper states that "the training hyper-parameters and training schedules were the same as the ones used in Xu et al. (2020a)", deferring the specific details to another publication rather than providing them explicitly in the main text.