URNet: User-Resizable Residual Networks with Conditional Gating Module
Authors: Sangho Lee, Simyung Chang, Nojun Kwak (pp. 4569-4576)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the experiments on ImageNet, URNet based on ResNet-101 maintains the accuracy of the baseline even when resizing it to approximately 80% of the original network, and demonstrates only about 1% accuracy degradation when using about 65% of the computation. |
| Researcher Affiliation | Collaboration | Sangho Lee (1), Simyung Chang (1,2), Nojun Kwak (1); 1: Seoul National University, Seoul, Korea; 2: Samsung Electronics, Suwon, Korea |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement or link regarding the release of source code for the methodology. |
| Open Datasets | Yes | In the following experiments, we have trained and evaluated our method on CIFAR-10, CIFAR-100 (Krizhevsky 2009) and ImageNet (Deng et al. 2009) datasets with top-1 accuracy. |
| Dataset Splits | No | The paper mentions using CIFAR-10, CIFAR-100, and ImageNet datasets but does not explicitly provide percentages or counts for training, validation, and test splits. While standard splits are implied for these common datasets, they are not explicitly stated. |
| Hardware Specification | No | The paper does not specify the hardware used for experiments, such as particular GPU or CPU models, memory, or cloud instances. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers, such as Python libraries or frameworks. |
| Experiment Setup | Yes | We train CGM only for 100 epochs on CIFAR datasets and 5 epochs on ImageNet. Then, we train CGM and the base network jointly for 400 additional epochs on CIFAR and 15 epochs on ImageNet. The learning rate is adjusted from 10^-3 to 10^-5. ... The channel reduction rate r of CGM (see Figure 3) is set to 2 for CIFAR datasets and 16 for ImageNet. ... For these experiments we have set β in equation (5) to 2.0. During training, the scale parameter S has been uniformly sampled in the range [0.2, 1.0] for every iteration. |
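The setup row above describes a gating module with a channel-reduction bottleneck (rate r), a user-supplied scale parameter S resampled uniformly in [0.2, 1.0] each training iteration, and a penalty (β = 2.0 in equation (5)) tying the fraction of active blocks to S. Since the paper provides no pseudocode, the sketch below is only a minimal NumPy illustration of that schedule; the additive way S conditions the gate logit, the exact loss form, and all variable names are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cgm_gate(features, w_reduce, w_expand, scale):
    """Hypothetical sketch of a Conditional Gating Module (CGM) decision.

    features: (C,) channel-wise pooled activations of one residual block.
    w_reduce / w_expand: bottleneck weights with channel reduction rate r.
    scale: user-chosen resize parameter S in [0.2, 1.0].
    """
    hidden = np.maximum(features @ w_reduce, 0.0)  # squeeze channels by factor r
    logit = float(hidden @ w_expand) + scale       # condition on S (assumed additive)
    return 1.0 / (1.0 + np.exp(-logit))            # soft gate in (0, 1)

def scale_loss(gate_probs, scale, beta=2.0):
    """Assumed equation-(5)-style penalty (beta = 2.0) pushing the mean
    gate activation toward the requested scale S."""
    return beta * (np.mean(gate_probs) - scale) ** 2

# Per-iteration schedule from the setup row: S is resampled uniformly.
C, r = 64, 16                                      # r = 16 is the ImageNet setting
w_reduce = rng.normal(scale=0.1, size=(C, C // r))
w_expand = rng.normal(scale=0.1, size=(C // r,))
S = rng.uniform(0.2, 1.0)                          # new S every training iteration
gates = np.array([cgm_gate(rng.normal(size=C), w_reduce, w_expand, S)
                  for _ in range(10)])             # one gate per residual block
penalty = scale_loss(gates, S)
```

In a full training loop this penalty would be added to the task loss, so that at inference time a user can dial S down to trade accuracy for computation without retraining.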