RTN: Reparameterized Ternary Network

Authors: Yuhang Li, Xin Dong, Sai Qian Zhang, Haoli Bai, Yuanpeng Chen, Wei Wang

AAAI 2020, pp. 4780-4787

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on ResNet18 for ImageNet demonstrate that the proposed RTN finds a much better efficiency between bitwidth and accuracy and achieves up to 26.76% relative accuracy improvement compared with state-of-the-art methods. Moreover, we validate the proposed computation pattern on Field Programmable Gate Arrays (FPGA), and it brings 46.46× and 89.17× savings on power and area compared with the full precision convolution.
Researcher Affiliation | Academia | Yuhang Li (University of Electronic Science and Technology of China), Xin Dong (Harvard University), Sai Qian Zhang (Harvard University), Haoli Bai (The Chinese University of Hong Kong), Yuanpeng Chen (University of Electronic Science and Technology of China), Wei Wang (National University of Singapore)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions using 'PyTorch implementations of XNOR-Net' and provides a link to 'https://github.com/jiecaoyu/XNOR-Net-PyTorch', which is a third-party implementation, not the authors' own source code for RTN.
Open Datasets | Yes | In this section, we first present some empirical evaluations of the reparameterized ternary network (RTN) on two real-world datasets: ImageNet-ILSVRC2012 (Russakovsky et al. 2015) and CIFAR-10 (Krizhevsky, Hinton, and others 2009). (A data-loading sketch for these two datasets follows the table.)
Dataset Splits | No | The paper mentions 'The validation error on ImageNet is plotted in Figure 5' and uses 'train' in its implementation details, but it does not provide specific percentages, sample counts, or detailed methodology for dataset splits (e.g., train/validation/test percentages).
Hardware Specification | Yes | We synthesize our design with Xilinx Vivado Design Suite and use the Xilinx VC707 FPGA evaluation board for power measurement.
Software Dependencies | No | The paper mentions 'Xilinx Vivado Design Suite' and 'Synopsys Design Compiler' as software used for design synthesis and comparison, but it does not provide specific version numbers for these tools.
Experiment Setup | Yes | We follow the implementation setting of other extremely low-bit quantization networks (Rastegari et al. 2016) and do not quantize the weights and activations in the first and the last layers. Initialization can be vitally important for quantized neural networks. We first train a full-precision model from scratch and initialize the RTN by minimizing the Euclidean distance between quantized and full-precision weights, as in TWN (Li, Zhang, and Liu 2016). For example, the initial γ is set to E_{|A|>0.5}(|A|) and β is set to 0. (A sketch of this initialization follows the table.)
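The initialization quoted in the Experiment Setup row can be written compactly. Below is a minimal PyTorch sketch, assuming per-tensor statistics and the fixed 0.5 threshold taken from the quote; the helper name init_rtn_params and the fallback for an empty mask are our assumptions, not the authors' code.

```python
import torch

def init_rtn_params(weight: torch.Tensor):
    """TWN-style initialization as quoted from the paper: gamma is the mean
    magnitude E_{|A|>0.5}(|A|) of entries whose absolute value exceeds 0.5,
    and beta starts at 0. Per-tensor granularity is our assumption."""
    a = weight.abs()
    mask = a > 0.5                       # entries contributing to the scale
    gamma = a[mask].mean() if mask.any() else a.mean()  # fallback if no entry exceeds 0.5
    beta = torch.zeros(())               # offset initialized to zero
    return gamma, beta
```

Starting from a pretrained full-precision model and choosing γ and β this way keeps the ternarized weights close in Euclidean distance to their full-precision counterparts at the start of fine-tuning, which is the stated purpose of the TWN-style initialization.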
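For the two datasets named in the Open Datasets row, a standard torchvision loading sketch is shown below. The root paths and the minimal transform are our assumptions; the paper does not specify its preprocessing, and ImageNet-ILSVRC2012 must be obtained separately.

```python
import torchvision
import torchvision.transforms as T

# Minimal transform; the paper's exact preprocessing is not specified.
transform = T.Compose([T.ToTensor()])

# CIFAR-10 is downloaded automatically by torchvision.
cifar_train = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)

# The ImageNet-ILSVRC2012 archives must already be present under the root.
imagenet_val = torchvision.datasets.ImageNet(
    root="/path/to/ILSVRC2012", split="val", transform=transform)
```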