How Much Position Information Do Convolutional Neural Networks Encode?
Authors: Md Amirul Islam*, Sen Jia*, Neil D. B. Bruce
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | A comprehensive set of experiments show the validity of this hypothesis and shed light on how and where this information is represented while offering clues to where positional information is derived from in deep CNNs. |
| Researcher Affiliation | Academia | Md Amirul Islam¹,², Sen Jia¹, Neil D. B. Bruce¹,²; ¹Ryerson University, Canada; ²Vector Institute for Artificial Intelligence, Canada; amirul@scs.ryerson.ca, sen.jia@ryerson.ca, bruce@ryerson.ca |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We use the DUT-S dataset (Wang et al., 2017) as our training set, which contains 10,533 images for training. Following the common training protocol used in (Zhang et al., 2017; Liu et al., 2018), we train the model on the training set of DUT-S and evaluate the existence of position information on the natural images of the PASCAL-S (Li et al., 2014) dataset. |
| Dataset Splits | No | The paper mentions training on the DUT-S dataset and evaluating on PASCAL-S, implying distinct training and testing data, but does not specify explicit validation dataset splits (e.g., percentages, counts, or explicit 'validation set' mentions) within either dataset or as a separate split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, or cloud instance specifications) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software components (e.g., libraries, frameworks, or programming languages like Python 3.x, TensorFlow x.x, PyTorch x.x) used in the experiments. |
| Experiment Setup | Yes | We initialize the architecture with a network pretrained for the ImageNet classification task. The new layers in the position encoding branch are initialized with Xavier initialization (Glorot & Bengio, 2010). We train the networks using stochastic gradient descent for 15 epochs with momentum of 0.9, and weight decay of 1e-4. We resize each image to a fixed size of 224×224 during training and inference. |
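The Experiment Setup row pins down most of the optimizer configuration (SGD, momentum 0.9, weight decay 1e-4, 15 epochs, Xavier initialization for the new position-encoding layers, 224×224 inputs). A minimal PyTorch sketch of that configuration is below; the head's layer shape and the learning rate are assumptions, since the paper's table entry does not state them, and the head here stands in for the paper's position encoding branch.

```python
import torch
import torch.nn as nn

# Hypothetical position-encoding head on top of a pretrained backbone;
# the 512-channel input and single-channel output are illustrative only.
head = nn.Conv2d(512, 1, kernel_size=3, padding=1)

# New layers are Xavier-initialized, as stated in the setup row.
nn.init.xavier_uniform_(head.weight)
nn.init.zeros_(head.bias)

# SGD with momentum 0.9 and weight decay 1e-4, per the paper;
# the learning rate is an assumed placeholder (not reported).
optimizer = torch.optim.SGD(
    head.parameters(), lr=1e-2, momentum=0.9, weight_decay=1e-4
)

# Images are resized to a fixed 224x224; a 224/8 = 28-pixel feature map
# is a typical backbone output at that resolution (stand-in tensor here).
x = torch.randn(1, 512, 28, 28)
pred = head(x)
```

Training would then loop over DUT-S batches for 15 epochs, stepping this optimizer against whatever supervision target the position encoding branch uses.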