Learning Segmentation Masks with the Independence Prior
Authors: Songmin Dai, Xiaoqiang Li, Lu Wang, Pin Wu, Weiqin Tong, Yimin Chen
AAAI 2019, pp. 3429–3436
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply our framework in two cases: (1) foreground segmentation on category-specific images with box-level annotation, and (2) unsupervised learning of instance appearances and masks from only one image of a homogeneous object cluster (HOC). We get appealing results in both tasks, which shows that the independence prior is useful for instance segmentation and that it is possible to learn instance masks without supervision from only one image. |
| Researcher Affiliation | Academia | Songmin Dai, Xiaoqiang Li, Lu Wang, Pin Wu, Weiqin Tong, Yimin Chen; School of Computer Engineering and Science, Shanghai University, China; Shanghai Institute for Advanced Communication and Data Science, Shanghai University, China; {laodar, xqli, luwang, wupin, wqtong, ymchen}@shu.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We evaluate our foreground segmentation method on the CelebA (Liu et al. 2015) and Caltech-200 bird (Wah et al. 2011) datasets. We construct the reference background images for CelebA by sampling patches with the same size from the INRIA Person (Dalal and Triggs 2005) dataset's negative images. (A hedged sketch of this patch sampling appears after the table.) |
| Dataset Splits | No | The paper mentions using the CelebA and Caltech-200 bird datasets and training models, but it does not specify explicit training, validation, or test splits in percentages or sample counts, nor does it refer to predefined splits. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using "TensorFlow (Abadi et al. 2016)", "SN-GAN (Miyato et al. 2018)", "UNet (Ronneberger, Fischer, and Brox 2015)", and the "ADAM (Kingma and Ba 2014)" optimizer. However, it does not provide specific version numbers for any of these software components. |
| Experiment Setup | Yes | All models are trained with the ADAM (Kingma and Ba 2014) optimizer with β1 = 0.5 and β2 = 0.999. The learning rate is fixed at 0.0002 for discriminators and 0.0004 for generators. See Appendix for further details. (The reported settings are translated into a hedged code sketch after the table.) |
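
For readers attempting reproduction, the reference-background construction quoted in the Open Datasets row can be made concrete. The following is a minimal sketch, not code from the paper: the directory layout, file extension, patch size, patch count, and the function name `sample_background_patches` are all assumptions; the paper only states that patches of the same size as the CelebA inputs were sampled from INRIA Person negative images.

```python
# Hedged sketch (not from the paper) of sampling fixed-size background
# patches from INRIA Person negative images. Paths, extension, patch size,
# and patch count are assumptions for illustration.
import numpy as np
from pathlib import Path
from PIL import Image

def sample_background_patches(neg_dir, patch_size=64, n_patches=10000, seed=0):
    """Randomly crop `n_patches` square RGB patches from negative images."""
    rng = np.random.default_rng(seed)
    paths = sorted(Path(neg_dir).glob("*.png"))
    patches = []
    for _ in range(n_patches):
        img = np.asarray(Image.open(paths[rng.integers(len(paths))]).convert("RGB"))
        h, w, _ = img.shape  # assumes each image is at least patch_size on each side
        y = int(rng.integers(0, h - patch_size + 1))
        x = int(rng.integers(0, w - patch_size + 1))
        patches.append(img[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)  # shape: (n_patches, patch_size, patch_size, 3)
```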
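
The optimizer settings in the Experiment Setup row also translate directly into code. A minimal sketch follows, assuming TensorFlow 2.x; the paper cites TensorFlow but does not pin a version (in the TF1 era of the paper, the equivalent call would be `tf.train.AdamOptimizer(2e-4, beta1=0.5, beta2=0.999)`). The variable names `disc_optimizer` and `gen_optimizer` are illustrative.

```python
# Hedged sketch of the reported optimizer configuration (TensorFlow 2.x assumed).
import tensorflow as tf

# ADAM with beta1 = 0.5, beta2 = 0.999, as stated in the paper;
# learning rate 0.0002 for discriminators, 0.0004 for generators.
disc_optimizer = tf.keras.optimizers.Adam(learning_rate=2e-4, beta_1=0.5, beta_2=0.999)
gen_optimizer = tf.keras.optimizers.Adam(learning_rate=4e-4, beta_1=0.5, beta_2=0.999)
```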