Deformable Kernels: Adapting Effective Receptive Fields for Object Deformation
Authors: Hang Gao, Xizhou Zhu, Stephen Lin, Jifeng Dai
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implement DKs as generic drop-in replacements of rigid kernels and conduct a series of empirical studies whose results conform with our theories. Over several tasks and standard base models, our approach compares favorably against prior works that adapt during runtime. In addition, further experiments suggest a working mechanism orthogonal and complementary to previous works. [...] We evaluate our Deformable Kernels (DKs) on image classification using ILSVRC and object detection using the COCO benchmark. |
| Researcher Affiliation | Collaboration | Hang Gao (1,3), Xizhou Zhu (2,3), Stephen Lin (3), Jifeng Dai (3); (1) UC Berkeley, (2) University of Science and Technology of China, (3) Microsoft Research Asia |
| Pseudocode | No | The paper describes the computation flow in Appendix A but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides the URL 'http://people.eecs.berkeley.edu/~hangg/deformable-kernels/', which is a project page rather than an explicit direct link to a source-code repository. There is no explicit statement about releasing the code for the methodology. |
| Open Datasets | Yes | We first train our networks on the ImageNet 2012 training set (Deng et al., 2009). We examine DKs on the COCO benchmark (Lin et al., 2014). |
| Dataset Splits | Yes | Following the standard protocol, training and evaluation are performed on the 120k images in the train-val split and the 20k images in the test-dev split, respectively. For more details, please refer to our supplement. We evaluate the performance of trained models on the ImageNet 2012 validation set. |
| Hardware Specification | No | The paper mentions 'CUDA' which implies NVIDIA GPUs, but does not specify any particular GPU models, CPU models, or other detailed hardware specifications used for running experiments. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'CUDA' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | Implementation Details: We implement our operators in PyTorch and CUDA. We exploit depthwise convolutions when designing our operator for better computational efficiency. We initialize kernel grids to be uniformly distributed within the scope size. For the kernel offset generator, we set its learning rate to be a fraction of that of the main network, which we cross-validate for each base model. We also find it important to clip sampling locations inside the original kernel space, such that k + Δk ∈ K in Equation 7. During training, we set the weight decay to be 4×10⁻⁵ rather than the common 10⁻⁴ for both models, since depthwise models usually underfit rather than overfit (Xie et al., 2017; Howard et al., 2017; Hu et al., 2018). We set the learning rate multiplier of DK operators as 10⁻² for ResNet-50-DW and 10⁻¹ for MobileNet-V2 in all of our experiments. Training is performed by SGD for 90 epochs with momentum 0.9 and batch size 256. We set our learning rate to 10⁻¹, linearly warmed up from zero within the first 5 epochs. A cosine training schedule is applied over the training epochs. We use scale and aspect ratio augmentation with color perturbation as standard data augmentations. |
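The training schedule quoted in the Experiment Setup row (SGD for 90 epochs, base learning rate 10⁻¹ with a 5-epoch linear warmup from zero, cosine decay afterwards, and a per-model learning-rate multiplier for the DK offset generator) can be sketched as a small schedule function. This is a minimal sketch under our own assumptions: the function names `lr_at` and `dk_lr_at` and the epoch-level (rather than iteration-level) warmup granularity are not specified in the paper.

```python
import math

def lr_at(epoch, base_lr=0.1, warmup_epochs=5, total_epochs=90):
    """Learning rate at a given epoch: linear warmup, then cosine decay.

    Assumes epoch-level granularity; the paper does not state whether
    warmup is applied per epoch or per iteration.
    """
    if epoch < warmup_epochs:
        # Linear warmup: lr rises from 0 to base_lr across the first epochs.
        return base_lr * epoch / warmup_epochs
    # Cosine decay from base_lr down to 0 over the remaining epochs.
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

def dk_lr_at(epoch, multiplier=1e-2, **kwargs):
    """Learning rate for the DK kernel offset generator.

    The paper sets this as a fraction of the main network's rate:
    a multiplier of 1e-2 for ResNet-50-DW, 1e-1 for MobileNet-V2.
    """
    return multiplier * lr_at(epoch, **kwargs)
```

In a PyTorch training loop this would typically be realized with two optimizer parameter groups (main network and offset generator), updating each group's `lr` from these functions at every epoch.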