Cross-channel Communication Networks
Authors: Jianwei Yang, Zhile Ren, Chuang Gan, Hongyuan Zhu, Devi Parikh
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on multiple vision tasks show that our proposed block brings improvements for different CNN architectures, and learns more diverse and complementary representations. |
| Researcher Affiliation | Collaboration | 1 Georgia Institute of Technology, 2 Facebook AI Research, 3 MIT-IBM Watson AI Lab, 4 Institute for Infocomm Research, A*Star, Singapore |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: 'We refer to [26] for the implementation of Faster R-CNN. https://github.com/jwyang/faster-rcnn.pytorch, 2017.' However, it does not explicitly provide a link to the open-source code for the methodology described in this paper (the C3 block itself), nor does it state that their code will be released. |
| Open Datasets | Yes | We conduct experiments on two popular benchmarks: 1) CIFAR-100 [16]... 2) ImageNet [21]... We use Faster R-CNN [20] for object detection on the COCO dataset [19], and Deeplab-V2 [2] for semantic segmentation on the Pascal VOC dataset [5]. |
| Dataset Splits | Yes | ImageNet [21], which has 1000 classes and more than 1.28M images for training, and 50K for validation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiments. |
| Experiment Setup | Yes | Specifically, we use stochastic gradient descent (SGD) with an initial learning rate 0.1, momentum 0.99, and weight decay 1e-4 for both datasets. The learning rate is decayed by 10 after 100 and 140 epochs for CIFAR-100, and 30 and 60 for ImageNet. We report the average best accuracy of 5 runs. |
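
As a rough aid to reproduction, the training schedule quoted in the Experiment Setup row maps directly onto a PyTorch optimizer/scheduler pair. The sketch below is our own illustration, not code released by the authors: the `build_optimizer_and_scheduler` helper, its `dataset` argument, and the surrounding structure are assumptions, while the learning rate, momentum, weight decay, and decay epochs are the values reported in the paper.

```python
# Hedged sketch of the reported optimization setup, assuming standard PyTorch.
import torch


def build_optimizer_and_scheduler(model, dataset="cifar100"):
    # SGD with initial lr 0.1, momentum 0.99, weight decay 1e-4 (as quoted above).
    optimizer = torch.optim.SGD(
        model.parameters(), lr=0.1, momentum=0.99, weight_decay=1e-4
    )
    # Learning rate decayed by 10x at the epochs stated in the paper:
    # 100 and 140 for CIFAR-100, 30 and 60 for ImageNet.
    milestones = [100, 140] if dataset == "cifar100" else [30, 60]
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=milestones, gamma=0.1
    )
    return optimizer, scheduler
```

The paper reports the average best accuracy over 5 runs, so a faithful reproduction would repeat training with this setup under 5 different random seeds and average the per-run best validation accuracy.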