Natural Image Matting via Guided Contextual Attention

Authors: Yaoyi Li, Hongtao Lu

AAAI 2020, pp. 11450-11457

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiment results on Composition-1k testing set and alphamatting.com benchmark dataset demonstrate that our method outperforms state-of-the-art approaches in natural image matting.
Researcher Affiliation | Academia | Department of Computer Science and Engineering, Shanghai Jiao Tong University, China {dsamuel, htlu}@sjtu.edu.cn
Pseudocode | No | The paper does not include any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | Code and models are available at https://github.com/Yaoyi-Li/GCA-Matting.
Open Datasets | Yes | The network is trained for 200,000 iterations with a batch size of 40 in total on the Adobe Image Matting dataset (Xu et al. 2017). ... Finally, we randomly select one background image from MS COCO dataset (Lin et al. 2014) for each foreground patch and composite them to get the input image. (A compositing sketch follows the table.)
Dataset Splits | No | The paper explicitly mentions using 'Composition-1k testing set' and training on 'Adobe Image Matting dataset', which implies standard splits for these datasets. However, it does not explicitly specify the details of a validation split (e.g., percentages, sample counts, or a dedicated 'validation set').
Hardware Specification | Yes | Additionally, our proposed method can evaluate each image in Composition-1k testing dataset as a whole on a single Nvidia GTX 1080 with 8GB memory.
Software Dependencies | No | The paper mentions using 'Adam optimizer' but does not specify version numbers for any software libraries, frameworks (e.g., TensorFlow, PyTorch), or programming languages used for implementation.
Experiment Setup | Yes | The network is trained for 200,000 iterations with a batch size of 40 in total on the Adobe Image Matting dataset (Xu et al. 2017). We perform optimization using Adam optimizer (Kingma and Ba 2014) with β1 = 0.5 and β2 = 0.999. The learning rate is initialized to 10⁻⁴. Warmup and cosine decay (Loshchilov and Hutter 2016; Goyal et al. 2017; He et al. 2019) are applied to the learning rate. (An optimizer/scheduler sketch follows the table.)
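
The composition step quoted in the Open Datasets row follows the standard matting equation I = αF + (1 − α)B, with F a foreground patch from the Adobe Image Matting dataset, B a background sampled from MS COCO, and α the ground-truth matte. Below is a minimal sketch of that step, assuming NumPy arrays with values in [0, 1]; the function and variable names are illustrative and not taken from the GCA-Matting code.

```python
import numpy as np

def composite(fg, alpha, bg):
    """Composite a foreground patch onto a background crop.

    fg:    (H, W, 3) float array, foreground colors in [0, 1]
    alpha: (H, W)    float array, ground-truth alpha matte in [0, 1]
    bg:    (H, W, 3) float array, background resized/cropped to fg's size

    Returns the input image I = alpha * F + (1 - alpha) * B.
    """
    a = alpha[..., None]  # add a channel axis so alpha broadcasts over RGB
    return a * fg + (1.0 - a) * bg

# Toy example with random data; in practice fg/alpha come from the Adobe
# Image Matting dataset and bg from a randomly selected MS COCO image.
fg = np.random.rand(320, 320, 3).astype(np.float32)
alpha = np.random.rand(320, 320).astype(np.float32)
bg = np.random.rand(320, 320, 3).astype(np.float32)
image = composite(fg, alpha, bg)
```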
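The Experiment Setup row quotes Adam with β1 = 0.5 and β2 = 0.999, an initial learning rate of 10⁻⁴, and warmup followed by cosine decay. A rough PyTorch sketch of such a schedule is given below; the warmup length and the helper name are assumptions for illustration, not values or identifiers reported in the paper.

```python
import math
import torch

def make_optimizer_and_scheduler(model, total_steps=200_000,
                                 warmup_steps=5_000, base_lr=1e-4):
    """Adam (beta1=0.5, beta2=0.999) with linear warmup then cosine decay.

    warmup_steps is an assumed value; the quoted text does not report it.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr,
                                 betas=(0.5, 0.999))

    def lr_lambda(step):
        if step < warmup_steps:
            # linear warmup from near zero up to base_lr
            return (step + 1) / warmup_steps
        # cosine decay from base_lr down to zero over the remaining steps
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```

With a per-iteration schedule like this, scheduler.step() would be called once per training iteration after optimizer.step(), so the quoted 200,000-iteration budget drives the decay directly.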