Recurrence along Depth: Deep Convolutional Neural Networks with Recurrent Layer Aggregation
Authors: Jingyu Zhao, Yanwen Fang, Guodong Li
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Its effectiveness is verified by our extensive experiments on image classification, object detection and instance segmentation tasks. Specifically, improvements can be uniformly observed on CIFAR, ImageNet and MS COCO datasets. |
| Researcher Affiliation | Academia | Jingyu Zhao, Yanwen Fang and Guodong Li; Department of Statistics and Actuarial Science, The University of Hong Kong; {gladys17, u3545683}@connect.hku.hk, gdli@hku.hk |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Figure 1 is a schematic diagram, not pseudocode. |
| Open Source Code | Yes | Our implementation and weights are available at https://github.com/fangyanwen1106/RLANet. |
| Open Datasets | Yes | We verify the effectiveness of our RLA module in Figure 3 on image classification, object detection and instance segmentation tasks using CIFAR, ImageNet and MS COCO datasets. |
| Dataset Splits | Yes | We adopt a train/validation split of 45k/5k and follow the widely used data preprocessing scheme in [20, 28]. (A data-split sketch follows the table.) |
| Hardware Specification | Yes | All experiments are implemented on four Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions using the MMDetection toolkit and implicitly relies on a deep learning framework such as PyTorch, but it does not provide version numbers for software dependencies such as Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | All the networks are trained from scratch using Nesterov SGD with momentum of 0.9, ℓ2 weight decay of 10⁻⁴, and a batch size of 128 for 300 epochs. The initial learning rate is set to 0.1 and is divided by 10 at 150 and 225 epochs. (A training-loop sketch follows the table.) |
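The 45k/5k CIFAR split quoted in the Dataset Splits row is straightforward to reproduce. Below is a minimal PyTorch sketch; the transforms, normalization constants, and random seed are assumptions, since the paper only says it follows the widely used preprocessing of [20, 28] (ResNet-style pad-and-crop plus horizontal flip).

```python
# Minimal sketch of the paper's 45k/5k CIFAR train/validation split.
# The transforms, normalization constants, and seed are assumptions based on
# the widely used ResNet-style CIFAR preprocessing the paper cites [20, 28].
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

normalize = transforms.Normalize((0.4914, 0.4822, 0.4465),
                                 (0.2470, 0.2435, 0.2616))
train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # pad to 40x40, random 32x32 crop
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])
eval_tf = transforms.Compose([transforms.ToTensor(), normalize])

# Two views of the same 50k training images: augmented for training,
# plain for validation.
train_data = datasets.CIFAR10("data", train=True, download=True, transform=train_tf)
val_data = datasets.CIFAR10("data", train=True, download=True, transform=eval_tf)

perm = torch.randperm(50_000, generator=torch.Generator().manual_seed(0))
train_set = Subset(train_data, perm[:45_000].tolist())
val_set = Subset(val_data, perm[45_000:].tolist())

train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)
val_loader = DataLoader(val_set, batch_size=128, shuffle=False, num_workers=4)
```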
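The training recipe in the Experiment Setup row maps directly onto PyTorch's `SGD` and `MultiStepLR`. A minimal sketch follows; the `resnet18` backbone is a stand-in for the paper's RLA-augmented networks (available at the repository linked above), and `train_loader` is the loader from the previous sketch.

```python
# Minimal sketch of the quoted training setup: Nesterov SGD, momentum 0.9,
# l2 weight decay 1e-4, batch size 128, 300 epochs, initial LR 0.1 divided
# by 10 at epochs 150 and 225. The resnet18 backbone is a placeholder
# assumption; the paper trains its own RLA-augmented networks.
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=10)  # stand-in for an RLA network
criterion = nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=1e-4, nesterov=True)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                 milestones=[150, 225],
                                                 gamma=0.1)

for epoch in range(300):
    model.train()
    for images, labels in train_loader:  # loader from the split sketch above
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```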