Designing by Training: Acceleration Neural Network for Fast High-Dimensional Convolution
Authors: Longquan Dai, Liang Tang, Yuan Xie, Jinhui Tang
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental (5 experiments) | Acc Net is the first neural network to produce fast convolution algorithms. To demonstrate its advantages, three experiments are conducted: (1) we compare the acceleration method designed by Acc Net to the handmade bilateral grid and permutohedral lattice acceleration methods; (2) we propose a new neural network that automatically designs fast algorithms and compare it to Acc Net; (3) we employ Acc Net to design new acceleration algorithms for non-Gaussian convolution and demonstrate their applications. |
| Researcher Affiliation | Collaboration | Longquan Dai, School of Computer Science and Engineering, Nanjing University of Science and Technology, dailongquan@njust.edu.cn; Liang Tang, CASA Environmental Technology Co., Ltd, CASA EM&EW IOT Research Center, tangl@casaet.com; Yuan Xie, Institute of Automation, Chinese Academy of Sciences, yuan.xie@ia.ac.cn; Jinhui Tang, School of Computer Science and Engineering, Nanjing University of Science and Technology, jinhuitang@njust.edu.cn |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described. |
| Open Datasets | No | The paper references datasets such as those used by [Zbontar and LeCun, 2015] for stereo matching and by Krähenbühl and Koltun for CRFs, but it does not provide concrete access information (e.g., a specific link, DOI, repository name, or formal citation with authors/year for dataset access) for a publicly available or open dataset. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment for its proposed method. |
| Experiment Setup | Yes | In the following experiments, the blurring layer of Acc Net consists of two cascaded gCP layers and the activation function is g(a, b) = max(ab, 0). Table 1: Filtering accuracy comparison for the bilateral grid acceleration method (BG), the permutohedral lattice acceleration method (PL), and our Acc Net, where the sampling period of splatting is 3, the radius of blurring is 1, and the radius of the original convolution is 5. Table 2: Comparison of two acceleration neural networks (CNN and Acc Net). The bandwidth of the target Gaussian kernel is 5 and the underlying lattice is the bilateral grid. |
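The setup above refers to a splat/blur/slice pipeline (splatting with sampling period 3, blurring with radius 1), the structure shared by the bilateral grid (BG) baseline and Acc Net's learned blurring stage. As context, here is a minimal 1-D NumPy sketch of that generic pipeline; the function name, nearest-neighbor splatting, box blur, and periodic grid boundaries are illustrative simplifications and not the paper's actual implementation.

```python
import numpy as np

def splat_blur_slice_1d(signal, s_space=3, s_range=3, blur_radius=1):
    """Minimal 1-D sketch of bilateral-grid-style acceleration.

    Splat samples onto a coarse (position, intensity) grid, blur the
    grid with a small box kernel, then slice values back out. This is
    a toy illustration, not the paper's BG/PL/Acc Net code.
    """
    n = len(signal)
    gx = n // s_space + 1                     # coarse position axis
    gy = int(signal.max()) // s_range + 2     # coarse intensity axis
    grid_val = np.zeros((gx, gy))
    grid_wt = np.zeros((gx, gy))

    # Splat: accumulate values and homogeneous weights at coarse cells.
    for i, v in enumerate(signal):
        xi, yi = i // s_space, int(v) // s_range
        grid_val[xi, yi] += v
        grid_wt[xi, yi] += 1.0

    # Blur: box filter of the given radius over both grid axes
    # (periodic boundaries via np.roll, for brevity).
    def box_blur(g, r):
        out = np.zeros_like(g)
        for dx in range(-r, r + 1):
            for dy in range(-r, r + 1):
                out += np.roll(np.roll(g, dx, axis=0), dy, axis=1)
        return out

    grid_val = box_blur(grid_val, blur_radius)
    grid_wt = box_blur(grid_wt, blur_radius)

    # Slice: read back each sample's cell, normalizing by the weight.
    out = np.empty(n)
    for i, v in enumerate(signal):
        xi, yi = i // s_space, int(v) // s_range
        w = grid_wt[xi, yi]
        out[i] = grid_val[xi, yi] / w if w > 0 else v
    return out
```

In Acc Net the handcrafted blur on the coarse grid is replaced by trainable layers (here, two cascaded gCP layers with the activation g(a, b) = max(ab, 0)), while the splat and slice stages play the same role as in this sketch.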