Learning Fully Dense Neural Networks for Image Semantic Segmentation

Authors: Mingmin Zhen, Jinglu Wang, Lei Zhou, Tian Fang, Long Quan (pp. 9283-9290)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have demonstrated the best performance of the FDNet on the two benchmark datasets: PASCAL VOC 2012, NYUDv2 over previous works when not considering training on other datasets.
Researcher Affiliation | Collaboration | Mingmin Zhen (1), Jinglu Wang (2), Lei Zhou (1), Tian Fang (3), Long Quan (1); (1) Hong Kong University of Science and Technology, (2) Microsoft Research Asia, (3) Altizure.com. {mzhen, lzhouai, quan}@cse.ust.hk, Jinglu.Wang@microsoft.com, fangtian@altizure.com
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We conduct comprehensive experiments on PASCAL VOC 2012 dataset (Everingham et al. 2010) and NYUDv2 dataset (Silberman et al. 2012).
Dataset Splits | Yes | PASCAL VOC 2012: The dataset has 1,464 images for training, 1,449 images for validation and 1,456 images for testing, which involves 20 foreground object classes and one background class... NYUDv2: We use the standard training/test split with 795 and 654 images, respectively.
Hardware Specification | Yes | The proposed FDNet is implemented with PyTorch on a single NVIDIA GTX 1080Ti.
Software Dependencies | No | The paper mentions 'The proposed FDNet is implemented with PyTorch', but does not specify a version number for PyTorch or any other software dependencies.
Experiment Setup | Yes | We train the dataset with 30K iterations. We optimize the network by using the poly learning rate policy where the initial learning rate is multiplied by (1 - iter/max_iter)^power with power = 0.9. The initial learning rate is set to 0.00025. We set momentum to 0.9 and weight decay to 0.0005.
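The poly learning-rate policy quoted in the Experiment Setup row can be sketched in a few lines of Python. This is a hypothetical helper illustrating the schedule with the paper's reported hyperparameters (30K iterations, initial learning rate 0.00025, power 0.9), not the authors' training code, which was not released.

```python
def poly_lr(initial_lr: float, iteration: int, max_iter: int, power: float = 0.9) -> float:
    """Poly learning-rate policy: initial_lr * (1 - iteration / max_iter) ** power.

    The rate starts at initial_lr, decays smoothly, and reaches 0 at max_iter.
    """
    return initial_lr * (1.0 - iteration / max_iter) ** power


if __name__ == "__main__":
    # Hyperparameters quoted from the paper.
    initial_lr, max_iter = 0.00025, 30_000
    for it in (0, 10_000, 20_000, 30_000):
        print(f"iter {it:>6}: lr = {poly_lr(initial_lr, it, max_iter):.6e}")
```

In PyTorch, the same schedule could be attached to an SGD optimizer (momentum 0.9, weight decay 0.0005, per the paper) via `torch.optim.lr_scheduler.LambdaLR` with `lambda it: (1 - it / max_iter) ** 0.9`.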