Tap and Shoot Segmentation

Authors: Ding-Jie Chen, Jui-Ting Chien, Hwann-Tzong Chen, Long-Wen Chang

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results on various datasets show that, by training a deep convolutional network to integrate the selection and focus/defocus cues, our method can achieve higher segmentation accuracy in comparison with existing interactive segmentation methods.
Researcher Affiliation | Academia | Ding-Jie Chen, Jui-Ting Chien, Hwann-Tzong Chen, Long-Wen Chang, National Tsing Hua University, Taiwan; {djchen.tw, ydnaandy123}@gmail.com, {htchen, lchang}@cs.nthu.edu.tw
Pseudocode | No | The paper describes the network architecture and training process in text and diagrams (Figure 2), but does not include structured pseudocode or an algorithm block.
Open Source Code | No | The paper provides links to code for *other* interactive segmentation algorithms (e.g., GrabCut, Random Walks) but does not provide a link or statement about open-sourcing its own code.
Open Datasets | Yes | We evaluate all algorithms on four public datasets. Each image contains one foreground region with pixel-level ground-truth labeling. GrabCut dataset (Rother, Kolmogorov, and Blake 2004): It contains 50 natural images. Berkeley dataset (McGuinness and O'Connor 2010): It contains 100 images. The images are from the popular Berkeley dataset (Martin et al. 2001). Extended complex scene saliency dataset (ECSSD) (Shi et al. 2016): The dataset contains 1,000 natural images. MSRA10K dataset (Cheng et al. 2015a): This dataset contains 10,000 natural images.
Dataset Splits | Yes | We partition the dataset into three nonoverlapping subsets of 8,000, 1,000, and 1,000 images for training, validation, and testing. (See the split sketch after this table.)
Hardware Specification | Yes | All algorithms are run on the same environment (Intel i7-4770 3.40 GHz CPU, 8 GB RAM, NVIDIA Titan X GPU).
Software Dependencies | No | The paper states "We implement all of them in Tensorflow" but does not specify the version number for TensorFlow or any other software dependencies.
Experiment Setup | Yes | All models are optimized with the ADAM algorithm using the same learning rate of 0.0001. The batch size is 9 and the network runs on a Titan X. Dropout is applied on layers Deconv8 to Deconv5 to avoid over-fitting; the dropout probability is 0.2 during training and 0.0 during testing. (See the configuration sketch after this table.)
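
The reported 8,000 / 1,000 / 1,000 partition of MSRA10K can be reproduced in spirit with a simple shuffled split. The sketch below is not the authors' code; the directory name `msra10k_images/`, the random seed, and the output list filenames are all assumptions.

```python
import random
from pathlib import Path

# Hypothetical image directory for MSRA10K; adjust to the actual layout.
image_paths = sorted(Path("msra10k_images").glob("*.jpg"))
assert len(image_paths) == 10000, "MSRA10K should contain 10,000 images"

random.seed(0)  # assumed seed; the paper does not specify one
random.shuffle(image_paths)

# Non-overlapping 8,000 / 1,000 / 1,000 split for train / val / test,
# matching the counts reported in the paper.
splits = {
    "train": image_paths[:8000],
    "val":   image_paths[8000:9000],
    "test":  image_paths[9000:],
}

for name, paths in splits.items():
    with open(f"{name}_list.txt", "w") as f:
        f.write("\n".join(str(p) for p in paths))
```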
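
The hyperparameters in the Experiment Setup row (ADAM, learning rate 0.0001, batch size 9, dropout 0.2 on the Deconv8 to Deconv5 layers, disabled at test time) can be expressed as a short TensorFlow sketch. This is illustrative only: it assumes the modern `tf.keras` API rather than the paper's original, unversioned TensorFlow code, and `model`, the loss choice, and the layer names are placeholders.

```python
import tensorflow as tf

# Optimizer matching the reported setup: ADAM with learning rate 1e-4.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)

# How the dropout on Deconv8..Deconv5 would be configured; tf.keras applies
# dropout only when training=True, mirroring "0.2 during training, 0.0 during testing".
deconv_dropout = tf.keras.layers.Dropout(rate=0.2)

BATCH_SIZE = 9  # as reported in the paper

# Hypothetical training step for a binary segmentation model `model`
# whose decoder already contains the dropout layers above.
@tf.function
def train_step(model, images, masks):
    with tf.GradientTape() as tape:
        logits = model(images, training=True)  # dropout active during training
        loss = tf.reduce_mean(
            tf.keras.losses.binary_crossentropy(masks, logits, from_logits=True)
        )
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```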