LCANets: Lateral Competition Improves Robustness Against Corruption and Attack
Authors: Michael Teti, Garrett Kenyon, Ben Migliori, Juston Moore
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments on action and image recognition datasets using common image corruptions and adversarial attacks to test the robustness of LCANets. For baselines, we compare against standard ResNet models, adversarially-trained ResNet models, and VOneResNet models. |
| Researcher Affiliation | Academia | Los Alamos National Laboratory, Los Alamos, NM, USA. |
| Pseudocode | No | The paper describes the algorithms used (e.g., the Locally Competitive Algorithm, LCA) but does not present them in structured pseudocode or algorithm blocks. (A minimal sketch of LCA dynamics follows this table.) |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | All models were implemented in PyTorch 1.10.1... They were then fine-tuned and tested on the UCF-101 (Soomro et al., 2012) and HMDB-51 (Kuehne et al., 2011) datasets using the standard train, validation, and test splits. ... We perform experiments on action and image recognition datasets ... as well as the CIFAR-10 image recognition dataset. |
| Dataset Splits | Yes | They were then fine-tuned and tested on the UCF-101 (Soomro et al., 2012) and HMDB-51 (Kuehne et al., 2011) datasets using the standard train, validation, and test splits. |
| Hardware Specification | Yes | All models were implemented in PyTorch 1.10.1 on a high-performance computing node with eight NVIDIA GeForce RTX 2080 Ti GPUs, 80 CPU cores, and 754GB of memory. |
| Software Dependencies | Yes | All models were implemented in PyTorch 1.10.1 on a high-performance computing node with eight NVIDIA GeForce RTX 2080 Ti GPUs, 80 CPU cores, and 754GB of memory. |
| Experiment Setup | Yes | All training hyperparameters, such as the batch size and learning rate, were set to the values used in (Kataoka et al., 2020). ... each model for 60 epochs on the standard training set with a batch size of 128, the one-cycle learning rate scheduler with max learning rate of 0.12 (Dong et al., 2015), and horizontal flipping and random cropping augmentation. (A training-loop sketch using these values follows the table.) |
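
Since the paper describes LCA only in prose (see the Pseudocode row above), here is a minimal PyTorch sketch of the standard fully-connected Locally Competitive Algorithm (Rozell et al., 2008) that LCANets build on. It illustrates lateral competition via soft-thresholded leaky integration; it is not the paper's implementation, and the function name and parameter values (`lam`, `tau`, `dt`, `n_steps`) are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def lca_sparse_code(x, phi, lam=0.1, tau=100.0, dt=1.0, n_steps=200):
    """Hedged sketch of LCA dynamics (Rozell et al., 2008), not the paper's code.

    x   : (batch, n_inputs) input signals
    phi : (n_inputs, n_neurons) dictionary, columns assumed unit-norm
    Returns sparse activations of shape (batch, n_neurons).
    """
    b = x @ phi                                   # feedforward drive  b = x @ Phi
    gram = phi.T @ phi                            # neuron-neuron similarity Phi^T Phi
    inhib = gram - torch.eye(phi.shape[1])        # lateral inhibition, no self-term
    u = torch.zeros_like(b)                       # membrane potentials
    for _ in range(n_steps):
        a = F.softshrink(u, lam)                  # soft threshold -> sparse code
        u = u + (dt / tau) * (b - u - a @ inhib)  # leaky integration + competition
    return F.softshrink(u, lam)
```

The `gram - I` term is the lateral competition of the paper's title: each active neuron suppresses the others in proportion to how similar their dictionary elements are, driving the representation toward sparsity.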
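
The CIFAR-10 settings quoted in the Experiment Setup row map directly onto a PyTorch training configuration. The sketch below uses the reported values (60 epochs, batch size 128, one-cycle schedule with max learning rate 0.12, horizontal flipping and random cropping); the SGD optimizer and momentum, the crop padding of 4, and the `resnet18` stand-in model are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F
import torchvision
import torchvision.transforms as T

# Augmentation as reported; padding=4 is a common CIFAR-10 convention, assumed here.
transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                         transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)   # stand-in for an LCANet/ResNet
optimizer = torch.optim.SGD(model.parameters(), lr=0.12, momentum=0.9)  # optimizer assumed
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.12, epochs=60, steps_per_epoch=len(train_loader))

for epoch in range(60):                               # reported epoch count
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
        scheduler.step()                              # one-cycle steps once per batch
```

Note that `OneCycleLR` is stepped once per batch rather than once per epoch, which is why `steps_per_epoch=len(train_loader)` must be supplied.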