Learning Efficient Object Detection Models with Knowledge Distillation
Authors: Guobin Chen, Wongun Choi, Xiang Yu, Tony Han, Manmohan Chandraker
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct comprehensive empirical evaluation with different distillation configurations over multiple datasets including PASCAL, KITTI, ILSVRC and MS-COCO. |
| Researcher Affiliation | Collaboration | NEC Labs America; University of Missouri; University of California, San Diego |
| Pseudocode | No | The paper does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We evaluate our method on several commonly used public detection datasets, namely, KITTI [12], PASCAL VOC 2007 [11], MS COCO [6] and ImageNet DET benchmark (ILSVRC 2014) [35]. |
| Dataset Splits | Yes | Since KITTI and ILSVRC 2014 do not provide ground-truth annotation for test sets, we use the training/validation split introduced by [39] and [24] for analysis. |
| Hardware Specification | No | The paper mentions running experiments "on GPU" but does not specify any particular GPU models, CPU models, memory details, or other specific hardware specifications. |
| Software Dependencies | No | The paper does not explicitly list software dependencies with their specific version numbers. |
| Experiment Setup | Yes | We fix them [λ and γ] to be 1 and 0.5, respectively, throughout the experiments. For example, we use w0 = 1.5 for the background class and wi = 1 for all the others in experiments on the PASCAL dataset. ... ν is a weight parameter (set as 0.5 in our experiments). |
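To make the quoted hyperparameters concrete, the following is a minimal PyTorch-style sketch of the two distillation components the setup refers to: a class-weighted soft-target cross entropy (w0 = 1.5 for background, wi = 1 otherwise) and a teacher-bounded regression term weighted by ν = 0.5. This is not the authors' code (none is released); tensor names, the batch, the temperature, the margin, and the hard/soft balance μ are illustrative assumptions, while λ = 1 and ν = 0.5 follow the values quoted above.

```python
import torch
import torch.nn.functional as F

def weighted_soft_ce(student_logits, teacher_logits, class_weights, T=1.0):
    """Class-weighted cross entropy between teacher and student soft targets."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # weight each class term before summing over classes, then average over the batch
    return -(class_weights * p_teacher * log_p_student).sum(dim=1).mean()

def teacher_bounded_l2(student_reg, teacher_reg, target_reg, margin=0.0):
    """Penalize the student's L2 regression error only when it exceeds
    the teacher's error plus a margin."""
    s_err = ((student_reg - target_reg) ** 2).sum(dim=1)
    t_err = ((teacher_reg - target_reg) ** 2).sum(dim=1)
    return torch.where(s_err + margin > t_err, s_err, torch.zeros_like(s_err)).mean()

# hypothetical batch of RoI classification logits and box-regression outputs
num_classes = 21                       # PASCAL VOC: 20 object classes + background
class_weights = torch.ones(num_classes)
class_weights[0] = 1.5                 # w0 = 1.5 for background, wi = 1 for the rest

student_logits = torch.randn(8, num_classes)
teacher_logits = torch.randn(8, num_classes)
student_reg = torch.randn(8, 4)
teacher_reg = torch.randn(8, 4)
target_reg = torch.randn(8, 4)
labels = torch.randint(0, num_classes, (8,))

mu = 0.5                               # hard/soft balance (illustrative assumption)
lam, nu = 1.0, 0.5                     # λ = 1 and ν = 0.5 as quoted from the paper

l_hard = F.cross_entropy(student_logits, labels)
l_soft = weighted_soft_ce(student_logits, teacher_logits, class_weights)
l_cls = mu * l_hard + (1.0 - mu) * l_soft

l_reg = (F.smooth_l1_loss(student_reg, target_reg)
         + nu * teacher_bounded_l2(student_reg, teacher_reg, target_reg))

total = l_cls + lam * l_reg
```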