Efficient and Effective Augmentation Strategy for Adversarial Training
Authors: Sravanti Addepalli, Samyak Jain, Venkatesh Babu R
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We obtain improved robustness and large gains in standard accuracy on multiple datasets (CIFAR-10, CIFAR-100, ImageNette) and model architectures (RN-18, WRN-34-10). |
| Researcher Affiliation | Academia | Sravanti Addepalli, Samyak Jain, R. Venkatesh Babu; Video Analytics Lab, Indian Institute of Science, Bangalore; Indian Institute of Technology (BHU) Varanasi |
| Pseudocode | Yes | (Ref: Algorithm-1 in the Appendix) |
| Open Source Code | Yes | The code for implementing DAJAT is available here: https://github.com/val-iisc/DAJAT. |
| Open Datasets | Yes | We obtain improved robustness and large gains in standard accuracy on multiple datasets (CIFAR-10, CIFAR-100, ImageNette) and model architectures (RN-18, WRN-34-10). |
| Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] Appendix-F.2 and Read Me file at https://github.com/val-iisc/DAJAT |
| Hardware Specification | Yes | Training time per epoch is reported by running each algorithm across 2 V100 GPUs. |
| Software Dependencies | No | The paper cites software such as PyTorch in its references (e.g., [31]) but does not provide version numbers for any of the software or libraries used in its experiments, either in the main text or in the supplementary sections accessible without external links. |
| Experiment Setup | Yes | We train all models for 110 epochs unless specified otherwise. |