EtinyNet: Extremely Tiny Network for TinyML

Authors: Kunran Xu, Yishi Li, Huawei Zhang, Rui Lai, Lin Gu

AAAI 2022, pp. 4628-4636

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on ImageNet using the MXNet (Chen et al. 2015) toolbox. During training, we use the standard SGD optimizer to train our models with both decay and momentum of 0.9, and the weight decay is 1e-4. We use the cosine learning schedule with an initial learning rate of 0.1 and the weight initialization introduced by He et al. (2015). The batch size is set to 1024 and 8 GPUs are used for training. We train all the models for 300 epochs. The input image is randomly cropped to 224×224 and randomly flipped horizontally, and is kept as an 8-bit signed integer with no standardization applied.
Researcher Affiliation | Academia | Kunran Xu (1,2), Yishi Li (1,2), Huawei Zhang (1,2), Rui Lai (1,2), Lin Gu (3,4). 1: School of Microelectronics, Xidian University, Xi'an 710071, China; 2: Chongqing Innovation Research Institute of Integrated Circuits, Xidian University, Chongqing 400031, China; 3: RIKEN AIP, Tokyo 103-0027, Japan; 4: The University of Tokyo, Japan.
Pseudocode | No | The paper includes architectural diagrams (Figure 2) and mathematical formulas (Equations 1-7) but does not present any pseudocode or algorithm blocks.
Open Source Code | Yes | The code and demo are available at https://github.com/aztc/EtinyNet.
Open Datasets | Yes | ImageNet-1000 (Deng et al. 2009) is the most convincing benchmark, which consists of 1,281,167 images belonging to 1,000 categories. ... The object detection performance of our EtinyNet-SSD and other state-of-the-art methods is benchmarked on the Pascal VOC (Everingham et al. 2009) dataset using an STM32H743 (ARM Cortex-M7 CPU with 512KB SRAM and 2MB Flash) MCU.
Dataset Splits | No | The paper uses ImageNet-1000 and Pascal VOC but does not explicitly state the training/validation/test splits (e.g., percentages or sample counts) used for these datasets.
Hardware Specification | Yes | The batch size is set to 1024 and 8 GPUs are used for training. ... we deploy EtinyNet-1.0 for ImageNet classification on STM32F412 (ARM Cortex-M4 CPU with 256KB SRAM and 1MB Flash) and STM32F746 MCUs. ... We perform experiments on the compact Xilinx Artix-7 XC7A100T FPGA.
Software Dependencies | No | We conduct extensive experiments on ImageNet using the MXNet (Chen et al. 2015) toolbox. While the software is named, no version numbers are provided for MXNet or other dependencies.
Experiment Setup | Yes | During training, we use the standard SGD optimizer to train our models with both decay and momentum of 0.9, and the weight decay is 1e-4. We use the cosine learning schedule with an initial learning rate of 0.1 and the weight initialization introduced by He et al. (2015). The batch size is set to 1024 and 8 GPUs are used for training. We train all the models for 300 epochs. The input image is randomly cropped to 224×224 and randomly flipped horizontally, and is kept as an 8-bit signed integer with no standardization applied. ... The initial learning rate and training epochs are adjusted to 0.01 and 40. We initialize λ as 2. (A training-setup sketch based on these settings follows this table.)
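For readers who want to reproduce the reported setup, below is a minimal sketch of the training configuration in MXNet Gluon, the toolbox the paper names. The hyperparameters (SGD with momentum 0.9, weight decay 1e-4, cosine schedule from 0.1, batch size 1024, 300 epochs, 224×224 random crop with horizontal flip) are quoted from the table above; the stand-in backbone, the steps-per-epoch arithmetic, and the omission of normalization transforms are illustrative assumptions, since the paper's text does not spell these out.

```python
# Minimal sketch of the paper's reported ImageNet training setup in MXNet Gluon.
# Quoted hyperparameters only; the tiny stand-in backbone and steps-per-epoch
# derivation are assumptions, not the authors' code.
import mxnet as mx
from mxnet import gluon, init
from mxnet.gluon.data.vision import transforms

batch_size = 1024                      # spread over 8 GPUs in the paper
epochs = 300
num_train_images = 1_281_167           # ImageNet-1000 training set size
steps_per_epoch = num_train_images // batch_size

# Cosine learning-rate schedule decaying from 0.1 toward 0 over all updates.
scheduler = mx.lr_scheduler.CosineScheduler(
    max_update=epochs * steps_per_epoch,
    base_lr=0.1,
    final_lr=0.0,
)

# Stand-in network: the actual EtinyNet architecture is in the authors' repo.
net = gluon.nn.HybridSequential()
net.add(
    gluon.nn.Conv2D(channels=32, kernel_size=3, strides=2, activation="relu"),
    gluon.nn.GlobalAvgPool2D(),
    gluon.nn.Dense(1000),
)
net.initialize(init.MSRAPrelu())       # He et al. (2015) initialization

trainer = gluon.Trainer(
    net.collect_params(),
    "sgd",
    {"lr_scheduler": scheduler, "momentum": 0.9, "wd": 1e-4},
)

# Augmentation as described: random 224x224 crop plus horizontal flip.
# The paper keeps inputs as 8-bit signed integers with no standardization,
# so no ToTensor()/Normalize() step is applied here.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomFlipLeftRight(),
])

# For the detection fine-tuning stage, the paper adjusts the initial learning
# rate to 0.01, trains for 40 epochs, and initializes lambda to 2.
```

Note that the cosine schedule is passed to the trainer via `lr_scheduler`, so its `base_lr` of 0.1 supplies the initial learning rate reported in the paper.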