Skew Orthogonal Convolutions
Authors: Sahil Singla, Soheil Feizi
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on CIFAR-10 and CIFAR-100 show that SOC allows us to train provably Lipschitz, large convolutional neural networks significantly faster than prior works while achieving significant improvements for both standard and certified robust accuracies. |
| Researcher Affiliation | Academia | Department of Computer Science, University of Maryland, College Park. Correspondence to: Sahil Singla <ssingla@umd.edu>. |
| Pseudocode | Yes | Algorithm 1: Skew Orthogonal Convolution (a minimal sketch of the idea appears below the table) |
| Open Source Code | Yes | Code is available at https://github.com/singlasahil14/SOC. |
| Open Datasets | Yes | Our experiments on CIFAR-10 and CIFAR-100 show that SOC allows us to train provably Lipschitz, large convolutional neural networks significantly faster than prior works while achieving significant improvements for both standard and certified robust accuracies. |
| Dataset Splits | No | The paper mentions training and evaluating on CIFAR-10 and CIFAR-100 but does not provide specific train/validation/test dataset splits or methodologies like k-fold cross-validation. |
| Hardware Specification | Yes | All experiments were performed using 1 NVIDIA GeForce RTX 2080 Ti GPU. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | All networks were trained for 200 epochs with an initial learning rate of 0.1, dropped by a factor of 0.1 after 50 and 150 epochs. We use no weight decay for training with BCOP convolution as it significantly reduces its performance. For training with standard convolution and SOC, we use a weight decay of 10⁻⁴. (A training-loop sketch appears below the table.) |
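The "Pseudocode" row refers to the paper's Algorithm 1. As a rough illustration of the construction, the sketch below builds a filter whose convolution has a skew-symmetric Jacobian and approximates its matrix exponential with a truncated Taylor series. It assumes equal input and output channels, odd kernel size, and stride 1; the function names and the number of series terms are our own choices, not taken from the released code at https://github.com/singlasahil14/SOC, which handles the general case.

```python
import torch
import torch.nn.functional as F

def skew_symmetric_filter(weight):
    """Build a filter whose stride-1, zero-padded convolution has a
    skew-symmetric Jacobian: L[i, j] = (W[i, j] - flip(W[j, i])) / 2,
    with both spatial dimensions flipped. weight: (C, C, k, k)."""
    w_t = weight.transpose(0, 1).flip(2, 3)
    return 0.5 * (weight - w_t)

def soc_forward(x, weight, terms=6):
    """Approximate exp(conv_L)(x) with a truncated Taylor series:
    y = x + Lx + L^2 x / 2! + ... (the idea behind Algorithm 1)."""
    L = skew_symmetric_filter(weight)
    pad = L.shape[-1] // 2
    y, term = x, x
    for k in range(1, terms + 1):
        term = F.conv2d(term, L, padding=pad) / k
        y = y + term
    return y

# Smoke test: an orthogonal map preserves the l2 norm, so the two
# norms should agree up to the Taylor truncation error.
x = torch.randn(1, 8, 32, 32)
w = 0.1 * torch.randn(8, 8, 3, 3)
print(x.norm().item(), soc_forward(x, w).norm().item())
```

The key step is that, for stride-1 convolution with zero padding, the adjoint of convolving with `W` is convolving with the channel-transposed, spatially flipped filter; subtracting the two therefore makes the Jacobian exactly skew-symmetric, and its exponential is orthogonal.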
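The "Experiment Setup" row maps directly onto a standard PyTorch training loop. The sketch below wires the quoted schedule (200 epochs, initial learning rate 0.1 decayed by 0.1 at epochs 50 and 150, weight decay 10⁻⁴ for SOC) into `SGD` plus `MultiStepLR`. The data pipeline, batch size, momentum value, and the stand-in linear model are our assumptions for illustration; the paper trains LipConvnet architectures built from SOC layers.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# CIFAR-10 via torchvision's default train split (an assumption; the
# paper does not spell out its split or augmentation pipeline).
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor())
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=128, shuffle=True)

# Stand-in network for a runnable example; not the paper's model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
criterion = nn.CrossEntropyLoss()

# Quoted setup: 200 epochs, lr 0.1 decayed by 0.1 at epochs 50 and 150,
# weight decay 1e-4 for SOC (0 for BCOP). Momentum 0.9 and batch size
# 128 are our assumptions.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[50, 150], gamma=0.1)

for epoch in range(200):
    for inputs, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
```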