Towards an Effective Orthogonal Dictionary Convolution Strategy

Authors: Yishi Li, Kunran Xu, Rui Lai, Lin Gu

Venue: AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate it on a variety of CNNs in small-scale (CIFAR), large-scale (ImageNet) and fine-grained (CUB-200-2011) image classification tasks, respectively. The experimental results show that our method achieves a stable and superior improvement."
Researcher Affiliation | Academia | 1 School of Microelectronics, Xidian University, Xi'an 710071, China; 2 Chongqing Innovation Research Institute of Integrated Circuits, Xidian University, Chongqing 400031, China; 3 RIKEN AIP, Tokyo 103-0027, Japan; 4 The University of Tokyo, Tokyo, Japan
Pseudocode | No | The paper describes the proposed strategy and includes mathematical formulas but does not provide any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access information (a specific link, an explicit statement of code release, or a mention of code in supplementary materials) for the described methodology.
Open Datasets | Yes | "We evaluate it on a variety of CNNs in small-scale (CIFAR), large-scale (ImageNet) and fine-grained (CUB-200-2011) image classification tasks, respectively."
Dataset Splits | Yes | "CIFAR-10 and CIFAR-100 consist of 50k training images and 10k validation images, divided into 10 and 100 classes respectively." (See the loading sketch after this table.)
Hardware Specification | No | The paper mentions training "on one GPU" or "on 8 GPUs" but does not provide specific hardware details such as GPU models (e.g., NVIDIA A100) or CPU specifications.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9, CUDA 11.1).
Experiment Setup | Yes | "We use the standard SGD optimizer to train our models with momentum of 0.9 and weight decay of 4e-5. ... These models are trained with a mini-batch size of 128 on one GPU." (See the optimizer sketch after this table.)
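
The "Dataset Splits" row quotes the standard CIFAR split directly. As a concreteness check, here is a minimal loading sketch; PyTorch/torchvision are an assumption (the paper does not name its framework), and the normalization statistics are the conventional CIFAR-10 values rather than anything stated in the paper.

```python
# Minimal sketch of the CIFAR split quoted in the "Dataset Splits" row.
# PyTorch/torchvision are an assumption: the paper does not name its framework.
import torchvision
import torchvision.transforms as T

transform = T.Compose([
    T.ToTensor(),
    # Conventional per-channel CIFAR-10 statistics; the paper does not state
    # its normalization, so these values are an assumption.
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])

# train=True yields the 50k training images, train=False the 10k validation images.
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
val_set = torchvision.datasets.CIFAR10(root="./data", train=False,
                                       download=True, transform=transform)

assert len(train_set) == 50_000 and len(val_set) == 10_000
```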
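
The "Experiment Setup" row likewise pins down the optimizer hyperparameters. The sketch below shows how they map onto a PyTorch SGD configuration; the backbone, learning rate, and dummy data are placeholders, since the quoted excerpt states only the momentum, weight decay, and batch size.

```python
# Sketch of the training configuration quoted in the "Experiment Setup" row,
# assuming PyTorch. Only momentum, weight decay, and batch size come from the
# excerpt; the backbone and learning rate below are placeholders.
import torch
import torchvision
from torch.utils.data import DataLoader, TensorDataset

model = torchvision.models.resnet18(num_classes=10)  # placeholder backbone

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,             # assumed: the excerpt does not give the initial LR
    momentum=0.9,       # "momentum of 0.9"
    weight_decay=4e-5,  # "weight decay of 4e-5"
)

# Dummy data stands in for CIFAR so the sketch is self-contained;
# batch_size=128 matches "a mini-batch size of 128 on one GPU".
dummy = TensorDataset(torch.randn(512, 3, 32, 32), torch.randint(0, 10, (512,)))
loader = DataLoader(dummy, batch_size=128, shuffle=True)
```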