Constructing Orthogonal Convolutions in an Explicit Manner
Authors: Tan Yu, Jun Li, Yunfeng Cai, Ping Li
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on CIFAR-10 and CIFAR-100 demonstrate that the proposed ECO convolution is faster than SOC in evaluation while leading to competitive standard and certified robust accuracies. |
| Researcher Affiliation | Industry | Tan Yu, Jun Li, Yunfeng Cai, Ping Li Cognitive Computing Lab Baidu Research 10900 NE 8th St. Bellevue, Washington 98004, USA {tanyu01,lijun12,caiyunfeng,liping11}@baidu.com |
| Pseudocode | Yes | Algorithm 1: The proposed efficient orthogonal convolution. ... Algorithm 2: Constructing the mapping matrix I |
| Open Source Code | No | The paper states, "It has been implemented in the Paddle Paddle deep learning platform https://www.paddlepaddle.org.cn." This refers to the deep-learning platform used for the implementation, not to a public release of the authors' own source code for the proposed method. |
| Open Datasets | Yes | The experiments are conducted on CIFAR-10 and CIFAR-100, both standard, publicly available benchmark datasets. |
| Dataset Splits | No | The paper mentions training and testing on CIFAR-10 and CIFAR-100 but does not explicitly provide details about a validation dataset split (e.g., percentages or counts) or its use. |
| Hardware Specification | Yes | The training is conducted on a single NVIDIA V100 GPU with 32G memory. |
| Software Dependencies | No | The paper mentions "Paddle Paddle deep learning platform" but does not specify a version number for this platform or any other software dependencies. |
| Experiment Setup | Yes | The training takes 200 epochs. The initial learning rate is 0.1 when the number of convolution layers is larger than 25 and 0.05 otherwise, and it decreases by a factor of 0.1 at the 50th and the 150th epoch. We set the weight decay as 5e-4. ... By default, we set T = 5 in training and T = 10 in testing. |
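The learning-rate schedule reported in the experiment-setup row can be sketched as a small helper function. This is a minimal, framework-free Python sketch of the described schedule, not the authors' PaddlePaddle implementation; the function name `learning_rate` and the 0-indexed epoch convention are assumptions for illustration.

```python
def learning_rate(epoch, num_conv_layers):
    """Sketch of the reported schedule (assumed 0-indexed epochs):
    initial LR 0.1 if the network has more than 25 convolution layers,
    0.05 otherwise; decayed by a factor of 0.1 at the 50th and again
    at the 150th of 200 total epochs."""
    base_lr = 0.1 if num_conv_layers > 25 else 0.05
    if epoch < 50:
        return base_lr          # warm phase: full base LR
    elif epoch < 150:
        return base_lr * 0.1    # after first decay at epoch 50
    else:
        return base_lr * 0.01   # after second decay at epoch 150
```

Whether the decay epochs are 0- or 1-indexed is not stated in the paper, so the boundary handling above is one reasonable reading.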