DDDM: A Brain-Inspired Framework for Robust Classification

Authors: Xiyuan Chen, Xingyu Li, Yi Zhou, Tianming Yang

IJCAI 2022

Each row below lists a reproducibility variable, its assessed result, and the LLM's supporting response:
Research Type: Experimental. "We conduct experiments on three types of datasets, including MNIST [LeCun et al., 1998] and CIFAR10 [Krizhevsky, 2009] for image classification, Speech Commands [Warden, 2018] for audio classification, and the IMDB dataset [Maas et al., 2011] for text classification. We compare the network's performance with and without the DDDM against a variety of adversarial attacks, including both white-box and black-box ones. The experimental results show that the DDDM improves the robustness of the network, providing defense against all adversarial attacks tested on all four datasets."
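The excerpt does not name the individual attacks. For orientation, below is a minimal PyTorch sketch of one standard white-box attack, the fast gradient sign method (FGSM); the `epsilon` budget and the cross-entropy loss are illustrative assumptions, not details from the paper:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    # Fast Gradient Sign Method, a representative white-box attack.
    # epsilon is an illustrative perturbation budget, not a value
    # reported in the paper.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp
    # the result back to the valid input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```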
Researcher Affiliation: Academia. (1) Institute of Neuroscience, Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China; (2) University of Chinese Academy of Sciences, Beijing, China; (3) Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai, China; (4) National Engineering Laboratory for Brain-inspired Intelligence Technology and Application, School of Information Science and Technology, University of Science and Technology of China, Hefei, China.
Pseudocode: No. The paper describes the model components mathematically and textually but does not include any pseudocode or algorithm blocks.
Open Source Code: Yes. Implementation: https://github.com/XiYuan68/DDDM
Open Datasets: Yes. "We conduct experiments on three types of datasets, including MNIST [LeCun et al., 1998] and CIFAR10 [Krizhevsky, 2009] for image classification, Speech Commands [Warden, 2018] for audio classification, and the IMDB dataset [Maas et al., 2011] for text classification."
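All four datasets have standard public distributions; a minimal sketch of fetching three of them through torchvision/torchaudio (the root path and transform are placeholders, since the paper's own preprocessing is not given in the excerpt):

```python
import os
from torchvision import datasets, transforms
from torchaudio.datasets import SPEECHCOMMANDS

os.makedirs("data", exist_ok=True)

# Placeholder transform; the paper's preprocessing is not specified.
to_tensor = transforms.ToTensor()

mnist = datasets.MNIST(root="data", train=True, download=True, transform=to_tensor)
cifar = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
speech = SPEECHCOMMANDS(root="data", download=True)  # Speech Commands [Warden, 2018]
```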
Dataset Splits: No. The paper mentions "model training and validation" in connection with the dropout rates a and b, but it does not say how each dataset was split into training, validation, and test sets (e.g., percentages or sample counts), nor does it point to standard splits for these datasets. Only the datasets themselves and the DDM parameters are given.
Hardware Specification: No. The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies: No. The paper does not give version numbers for any software dependencies; the implementation is presumably PyTorch-based, but no framework or library versions are stated.
Experiment Setup: Yes. "For all the tasks, we run experiments over different combinations of dropout rates (a, b), both taking values from the set {0, 0.2, 0.4, 0.6, 0.8}. Therefore, twenty-five classifiers are tested in each case. In implementing DDM, we draw 100 predictions from each classifier on each generated adversarial example. Then, 10 trials of length L = 25 are randomly sampled from those predictions. These trials are fed into the evidence accumulation mechanism with a decision threshold A = 0.99 for the final prediction. ... To implement the test-phase dropout classifier, we use the VGG16 architecture [Simonyan and Zisserman, 2015] without batch normalization. A dropout layer is added to each of the last six convolutional layers. ... Our Deep Speech 2 model includes a Mel-spectrogram conversion layer, followed by one 1D convolutional layer, two LSTMs, and two fully-connected layers. Dropout is applied only once, after the convolutional layer."
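Putting the quoted parameters together, here is a minimal sketch of one plausible reading of the inference procedure: dropout stays active at test time, 100 stochastic predictions are drawn per input, and trials of length L = 25 are accumulated until the evidence for some class crosses A = 0.99. The Bayesian update with an assumed per-sample `reliability` is our illustration of "evidence accumulation", not the paper's exact rule:

```python
import math
import torch

def enable_test_phase_dropout(model: torch.nn.Module) -> None:
    # Run the network in eval mode but keep dropout layers stochastic,
    # so repeated forward passes give different predictions.
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()

@torch.no_grad()
def draw_predictions(model, x, n_samples=100):
    # Draw n_samples stochastic class predictions for one input x
    # (e.g., an adversarial example), as in the quoted setup.
    enable_test_phase_dropout(model)
    return [int(model(x.unsqueeze(0)).argmax(dim=1)) for _ in range(n_samples)]

def dddm_decide(preds, num_classes, trial_len=25, n_trials=10,
                threshold=0.99, reliability=0.7, seed=0):
    # Multi-hypothesis sequential test over sampled predictions.
    # `reliability` (an assumed per-sample accuracy) is our assumption,
    # not a parameter reported in the paper.
    rng = torch.Generator().manual_seed(seed)
    preds_t = torch.tensor(preds)
    # Log-likelihood ratio contributed by each observed vote.
    llr = math.log(reliability) - math.log((1.0 - reliability) / (num_classes - 1))
    decisions = []
    for _ in range(n_trials):
        idx = torch.randint(len(preds_t), (trial_len,), generator=rng)
        log_post = torch.full((num_classes,), -math.log(num_classes))  # uniform prior
        for p in preds_t[idx]:
            log_post[p] += llr
            log_post -= torch.logsumexp(log_post, dim=0)  # renormalize
            if log_post.max().item() >= math.log(threshold):
                break  # evidence crossed the decision threshold A
        decisions.append(int(log_post.argmax()))
    # Final prediction: majority vote over the trial-level decisions.
    return max(set(decisions), key=decisions.count)
```

A call such as `dddm_decide(draw_predictions(model, x), num_classes=10)` would then produce the final label; the (a, b) dropout-rate grid from the quote is searched outside this routine.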