Feature Prioritization and Regularization Improve Standard Accuracy and Adversarial Robustness
Authors: Chihuang Liu, Joseph JaJa
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate our model on the MNIST, CIFAR-10, and CIFAR-100 datasets, and present empirical justification for the attention module along with some quantitative and qualitative results. |
| Researcher Affiliation | Academia | Chihuang Liu and Joseph JaJa, Institute for Advanced Computer Studies and Department of Electrical and Computer Engineering, University of Maryland, College Park, MD 20742, USA. {chliu, josephj}@umd.edu |
| Pseudocode | No | The paper provides mathematical formulations and descriptions of its model, but no explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that its source code is open-sourced or publicly available. |
| Open Datasets | Yes | Our model is evaluated on the MNIST, CIFAR-10, and CIFAR-100 datasets |
| Dataset Splits | No | The paper mentions training and testing phases and refers to the 'test set' but does not specify detailed train/validation/test splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | We use a CNN with two convolutional layers with 32 and 64 filters respectively, followed by two fully connected layers of size 1024 and 10. The network is trained with a 40-step PGD adversary with a step size of 0.01 and an ℓ∞ bound of ε = 0.3. The settings are the same as in Madry et al. [2017]. (See the sketch below the table.) |
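
Since the paper releases no code, the following is a minimal PyTorch sketch of the MNIST setup quoted above: a two-conv-layer CNN (32 and 64 filters) with fully connected layers of size 1024 and 10, attacked by 40-step ℓ∞ PGD with step size 0.01 and ε = 0.3. The 5×5 kernels, padding, and max-pooling are assumptions carried over from the Madry et al. [2017] architecture, since the paper states only filter counts and layer sizes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MNISTConvNet(nn.Module):
    """CNN matching the paper's description: two conv layers (32, 64 filters)
    followed by fully connected layers of size 1024 and 10.
    Kernel size / padding / pooling are assumed (Madry et al., 2017 style)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5, padding=2)
        self.fc1 = nn.Linear(64 * 7 * 7, 1024)
        self.fc2 = nn.Linear(1024, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 28x28 -> 14x14
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 14x14 -> 7x7
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

def pgd_attack(model, x, y, eps=0.3, step_size=0.01, steps=40):
    """40-step l_inf PGD with step size 0.01 and eps = 0.3,
    the attack settings stated in the paper."""
    # Random start inside the l_inf ball, clipped to valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Signed-gradient ascent step, then projection back onto the eps-ball.
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```

In an adversarial training loop of this kind, each clean batch would be replaced by `pgd_attack(model, x, y)` before the usual cross-entropy update; this sketch covers only the attack and architecture, not the paper's feature prioritization and regularization terms.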