Constructing Fast Network through Deconstruction of Convolution
Authors: Yunho Jeon, Junmo Kim
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To demonstrate the performance of our proposed method, we conducted several experiments with classification benchmark datasets. |
| Researcher Affiliation | Academia | Yunho Jeon, School of Electrical Engineering, KAIST (jyh2986@kaist.ac.kr); Junmo Kim, School of Electrical Engineering, KAIST (junmo.kim@kaist.ac.kr) |
| Pseudocode | No | The paper describes methods through equations and text but does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Code is available at https://github.com/jyh2986/Active-Shift. |
| Open Datasets | Yes | We conducted experiments to verify the basic performance of ASL with the CIFAR-10/100 dataset [14], which contains 50k training and 10k test 32×32 images. To prove the generality of the proposed method, we conducted experiments with the ImageNet 2012 classification task. (A hedged data-loading sketch follows the table.) |
| Dataset Splits | No | The paper states it uses '50k training and 10k test' images for CIFAR-10/100 but does not describe a separate validation split. |
| Hardware Specification | Yes | Time is measured using an Intel i7-5930K CPU with a single thread and averaged over 100 repetitions. Measured by Caffe [12] using an Intel i7-5930K CPU with a single thread and GTX Titan X (Maxwell). |
| Software Dependencies | No | The paper mentions using 'Caffe [12]' but does not specify a version number for the software dependency. |
| Experiment Setup | Yes | For ASL, the shift parameters are randomly initialized with a uniform distribution between -1 and 1. We used a normalized gradient following ACU [11] with an initial learning rate of 1e-2. Input images are normalized for all experiments. (A hedged sketch of this setup follows below.) |
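
Below is a minimal, hypothetical sketch of reproducing the 50k/10k CIFAR-10 split and input normalization quoted above. The use of PyTorch/torchvision and the normalization statistics are assumptions made for illustration; the paper's own experiments were run in Caffe.

```python
import torchvision
import torchvision.transforms as transforms

# The paper reports 50k training and 10k test 32x32 images for CIFAR-10/100,
# with normalized inputs. The channel means/stds below are the commonly used
# CIFAR-10 statistics, an assumption rather than a value from the paper.
normalize = transforms.Normalize(mean=(0.4914, 0.4822, 0.4465),
                                 std=(0.2470, 0.2435, 0.2616))
transform = transforms.Compose([transforms.ToTensor(), normalize])

train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(root="./data", train=False,
                                        download=True, transform=transform)

print(len(train_set), len(test_set))  # 50000 10000, matching the paper
```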
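
And a minimal sketch of the experiment-setup details quoted in the last row: per-channel shift parameters drawn uniformly from [-1, 1] and updated with a normalized gradient at a 1e-2 learning rate. This is an assumption-laden reconstruction for illustration, not the authors' released Caffe code (see https://github.com/jyh2986/Active-Shift); the channel count, placeholder loss, and epsilon are hypothetical.

```python
import torch

num_channels = 64  # assumed channel count, for illustration only

# Per-channel (horizontal, vertical) shift parameters of the Active Shift
# Layer, randomly initialized with a uniform distribution between -1 and 1,
# as stated in the paper.
shift = torch.empty(num_channels, 2).uniform_(-1.0, 1.0).requires_grad_()

# Placeholder scalar loss so the sketch runs end to end; in the real model
# this would come from the network's forward pass.
loss = (shift ** 2).sum()
loss.backward()

# Normalized-gradient step in the spirit of ACU [11]: scale the gradient of
# the shift parameters to unit norm before applying the 1e-2 learning rate.
lr = 1e-2
with torch.no_grad():
    shift -= lr * shift.grad / (shift.grad.norm() + 1e-12)  # epsilon assumed
    shift.grad.zero_()
```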