Latency-aware Spatial-wise Dynamic Networks
Authors: Yizeng Han, Zhihang Yuan, Yifan Pu, Chenhao Xue, Shiji Song, Guangyu Sun, Gao Huang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on image classification, object detection and instance segmentation demonstrate that the proposed framework significantly improves the practical inference efficiency of deep networks. |
| Researcher Affiliation | Academia | Yizeng Han¹, Zhihang Yuan², Yifan Pu¹, Chenhao Xue², Shiji Song¹, Guangyu Sun², Gao Huang¹. ¹Department of Automation, BNRist, Tsinghua University, Beijing, China; ²School of Electronics Engineering and Computer Science, Peking University, Beijing, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/LeapLabTHU/LASNet. |
| Open Datasets | Yes | The image classification experiments are conducted on the ImageNet [4] dataset. ... We further evaluate our LASNet on the COCO [22] object detection task. |
| Dataset Splits | No | The paper mentions evaluating on the 'ImageNet validation set' and conducting experiments on the ImageNet [4] and COCO [22] datasets, which implies the standard public splits are used. However, it does not explicitly give training/validation/test split percentages or sample counts in the main text. |
| Hardware Specification | Yes | Various types of hardware platforms are tested, including a server GPU (Tesla V100), a desktop GPU (GTX1080) and edge devices (e.g., Nvidia Nano and Jetson TX2). (A minimal latency-benchmark sketch follows the table.) |
| Software Dependencies | No | The paper mentions using 'torchvision pre-trained models' but does not specify software dependencies like PyTorch, CUDA, or other libraries with version numbers. |
| Experiment Setup | Yes | We fix α = 10, β = 0.5, and T = 4.0 for all dynamic models. More details are provided in Appendix B. ... finetune the whole network for 100 epochs ... finetuned on COCO with the standard setting for 12 epochs. (An illustrative sketch of how these hyperparameters are typically used follows the table.) |
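Because the Hardware Specification row reports results on a server GPU (Tesla V100), a desktop GPU (GTX1080), and Jetson-class edge devices, reproducing the paper's efficiency claims means measuring wall-clock latency rather than counting FLOPs. The sketch below is a generic PyTorch benchmark using CUDA events; it is not the paper's latency-prediction model, and the model choice and input shape are placeholders.

```python
import torch
import torchvision

@torch.no_grad()
def measure_latency_ms(model: torch.nn.Module, input_shape=(1, 3, 224, 224),
                       warmup: int = 20, iters: int = 100) -> float:
    """Average per-batch GPU latency in milliseconds, timed with CUDA events."""
    device = torch.device("cuda")
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)

    for _ in range(warmup):  # warm up kernels / cuDNN autotuning
        model(x)
    torch.cuda.synchronize()

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        model(x)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # milliseconds

if __name__ == "__main__":
    # A torchvision pre-trained backbone, as used for initialization in the paper.
    model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
    print(f"ResNet-50 latency: {measure_latency_ms(model):.2f} ms")
```

On batch-size-1 edge workloads like the Jetson boards listed above, wall-clock numbers from a harness like this can diverge sharply from FLOP counts, which is the gap the paper's latency-aware design targets.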
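The Experiment Setup row quotes α = 10, β = 0.5, and T = 4.0. In spatial-wise dynamic networks of this kind, T usually acts as the temperature of a Gumbel-Softmax (binary-concrete) relaxation that keeps binary spatial masks differentiable during training, while α weights an efficiency loss and β sets a target activation ratio. The sketch below illustrates that pattern under those assumptions; the `SpatialGate` module and all parameter names are hypothetical, not the official LASNet code (see the repository linked above for the real implementation).

```python
import torch
import torch.nn as nn

class SpatialGate(nn.Module):
    """Hypothetical spatial mask generator (illustration only, not official LASNet code).

    Predicts a per-pixel "compute vs. skip" decision so that convolutions can be
    evaluated only at informative locations at inference time.
    """

    def __init__(self, in_channels: int, temperature: float = 4.0):
        super().__init__()
        self.logits = nn.Conv2d(in_channels, 1, kernel_size=1)  # per-pixel gate logit
        self.temperature = temperature  # T = 4.0, quoted from the paper's setup

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logit = self.logits(x)  # (N, 1, H, W)
        if self.training:
            # Binary-concrete (Gumbel-Softmax) relaxation: logistic noise + temperature.
            u = torch.rand_like(logit).clamp(1e-6, 1.0 - 1e-6)
            noise = torch.log(u) - torch.log1p(-u)
            soft = torch.sigmoid((logit + noise) / self.temperature)
            hard = (soft > 0.5).float()
            # Straight-through estimator: hard mask forward, soft gradients backward.
            return hard + soft - soft.detach()
        # Inference: plain threshold; zeroed locations can be skipped by a sparse kernel.
        return (logit > 0).float()
```

A training objective would then combine the task loss with an efficiency term, for example penalizing the deviation of the mean mask density from the target ratio β with weight α. LASNet's actual objective is latency-aware (driven by predicted latency rather than FLOPs), so this is only a structural analogy.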