Learning Generalized Intersection Over Union for Dense Pixelwise Prediction

Authors: Jiaqian Yu, Jingtao Xu, Yiwei Chen, Weiming Li, Qiang Wang, Byungin Yoo, Jae-Joon Han

Venue: ICML 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experimental results show consistent performance improvements by learning PixIoU over the original IoU for several different pixelwise prediction tasks on Pascal VOC, VOT-2020 and Cityscapes. |
| Researcher Affiliation | Industry | (1) Samsung Research China, Beijing, China; (2) Samsung Advanced Institute of Technology, Suwon, South Korea. |
| Pseudocode | Yes | The calculation of PixIoU is summarized in Algorithm 1 (cf. the supplementary materials for a pseudo-code). ... Algorithm 2: Gradient computation of the Lovász PixIoU. |
| Open Source Code | No | The paper mentions using the PyTorch and Detectron2 frameworks, but provides no link to, or statement about, the availability of its own implementation of the described method. |
| Open Datasets | Yes | Experimental results show consistent performance improvements by learning PixIoU over the original IoU for several different pixelwise prediction tasks on Pascal VOC, VOT-2020 and Cityscapes. ... We first experiment on the VOT2020, a pixelwise object tracking benchmark. ... We then perform a semantic segmentation task on Pascal VOC 2012. ... Next, we train and test on the Cityscapes (Cordts et al., 2016), a large-scale dataset. |
| Dataset Splits | No | The paper mentions evaluating on the "Pascal VOC val set" and "Cityscapes val set" but gives no details on the splits themselves, such as percentages, sample counts, or how they were created, beyond naming the sets. |
| Hardware Specification | Yes | All training procedures are carried on with batchsize 16 on 8 P40 [GPUs]. |
| Software Dependencies | No | The paper mentions using the "PyTorch framework" and "Detectron2 system" but does not give version numbers for either. |
| Experiment Setup | Yes | SGD is used for the optimization with a polynomial learning rate policy 2.5e-4 × (1 − iter/max_iter)^0.9, with momentum 0.9 and weight decay 1e-4. We train 50 epochs on 2 GPUs with a batch size of 16. ... All training procedures are carried on with batchsize 16 on 8 P40, with a polynomial learning rate policy 0.008 × (1 − iter/max_iter)^0.9 and a warm-up by 1k iterations for the 90k-iteration training. |
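
The Lovász PixIoU gradient of Algorithm 2 is not reproduced in the paper body, but the name indicates it extends the standard Lovász-extension gradient for the Jaccard (IoU) loss (Berman et al., 2018). The sketch below shows that standard binary Lovász hinge in PyTorch for context; it is the vanilla IoU surrogate, not the paper's PixIoU, and the function names are illustrative rather than the authors' API.

```python
import torch

def lovasz_grad(gt_sorted: torch.Tensor) -> torch.Tensor:
    """Gradient of the Lovász extension of the Jaccard (IoU) loss.

    gt_sorted: 0/1 ground-truth labels, sorted by decreasing prediction error
    (Berman et al., 2018).
    """
    p = gt_sorted.numel()
    gts = gt_sorted.sum()
    intersection = gts - gt_sorted.cumsum(0)
    union = gts + (1.0 - gt_sorted).cumsum(0)
    jaccard = 1.0 - intersection / union
    if p > 1:
        # first differences of the cumulative Jaccard give the gradient
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge_flat(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Binary Lovász hinge over flattened per-pixel logits and 0/1 labels."""
    signs = 2.0 * labels - 1.0                 # map {0, 1} -> {-1, +1}
    errors = 1.0 - logits * signs              # per-pixel hinge errors
    errors_sorted, perm = torch.sort(errors, descending=True)
    grad = lovasz_grad(labels[perm])
    return torch.dot(torch.relu(errors_sorted), grad)
```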
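
For reference, the optimization recipe quoted in the Experiment Setup row maps onto a standard PyTorch `LambdaLR` schedule. This is a hedged sketch: the paper does not state the warm-up shape, so a linear warm-up is assumed; the momentum and weight decay values are taken from the first quoted recipe and assumed to carry over to the 90k-iteration setting; and `model` is a placeholder network.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import LambdaLR

MAX_ITER = 90_000      # quoted: 90k-iteration training
WARMUP_ITERS = 1_000   # quoted: warm-up by 1k iterations
POWER = 0.9            # quoted polynomial exponent

def poly_factor(it: int) -> float:
    """(1 - iter/max_iter)^0.9 decay; linear warm-up shape is an assumption."""
    if it < WARMUP_ITERS:
        return (it + 1) / WARMUP_ITERS
    return (1.0 - it / MAX_ITER) ** POWER

model = torch.nn.Conv2d(3, 21, kernel_size=1)  # placeholder network

# Quoted recipe: SGD with momentum 0.9 and weight decay 1e-4;
# base LR is 2.5e-4 or 0.008 depending on the experiment.
optimizer = SGD(model.parameters(), lr=0.008, momentum=0.9, weight_decay=1e-4)
scheduler = LambdaLR(optimizer, lr_lambda=poly_factor)

for it in range(MAX_ITER):
    # ... forward pass, loss.backward() on a batch of size 16 ...
    optimizer.step()
    scheduler.step()  # stepped once per iteration, not per epoch
```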