Active Matting

Authors: Xin Yang, Ke Xu, Shaozhe Chen, Shengfeng He, Baocai Yin, Rynson W.H. Lau

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Through extensive experiments, we show that the proposed model reduces user efforts significantly and achieves comparable performance to dense trimaps in a user-friendly manner." (Section 3, Experiments)
Researcher Affiliation | Academia | Xin Yang (Dalian University of Technology; City University of Hong Kong, xinyang@dlut.edu.cn); Ke Xu (Dalian University of Technology; City University of Hong Kong, kkangwing@mail.dlut.edu.cn); Shaozhe Chen (Dalian University of Technology, csz@mail.dlut.edu.cn); Shengfeng He (South China University of Technology, hesfe@scut.edu.cn); Baocai Yin (Dalian University of Technology, ybc@dlut.edu.cn); Rynson W.H. Lau (City University of Hong Kong, rynson.lau@cityu.edu.hk)
Pseudocode | No | The paper describes the model architecture and training process in text and diagrams (Figure 2) but does not include explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a link to the rendered-100 dataset but does not state that the source code for the described methodology is released or available.
Open Datasets | Yes | "We train our model using the training set of the portrait dataset. To avoid overfitting, we propose a rendered-100 dataset for fine tuning, which has 100 images and their corresponding ground truth mattes. We use 90 images for fine tuning with data augmentation, and 10 images for testing." The portrait dataset contains 1,700 training images, 300 testing images, and their corresponding ground truth mattes. The matting benchmark consists of 27 images with user-defined trimaps and ground truth mattes. The complete rendered-100 dataset (including rendered images, extracted foreground objects, and ground-truth alpha mattes) can be found at http://www.cs.cityu.edu.hk/~rynson/projects/matting/Active Matting.html
Dataset Splits | No | The paper mentions training and testing sets, and fine-tuning on a portion of a dataset, but it does not define a separate validation set for hyperparameter tuning or early stopping.
Hardware Specification | Yes | "Our active matting model is implemented using Tensorflow [1], and trained and tested on a PC with an i7-6700K CPU, 8G RAM and an NVIDIA GTX 1080 GPU."
Software Dependencies | No | The paper mentions using TensorFlow for implementation and adapting the VGG16 network, but it does not provide version numbers for these components or for any other libraries.
Experiment Setup | Yes | "The input images are resized to 400x400. We train our network from scratch with the truncated normal initializer. The learning rate is set to 10^-3 initially and then goes through an exponential decay to the minimum learning rate 10^-4. We also crop the gradients to prevent gradient explosion. As a result, we set S to 2 during training, which achieves a good balance between accuracy and efficiency. We fix S to 1 during inference."
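The quoted setup specifies an exponential learning-rate decay from 10^-3 floored at 10^-4, plus gradient clipping to prevent explosion. A minimal dependency-free Python sketch of that schedule and clipping is given below; the decay rate, decay interval, and clipping norm are assumptions, as the paper does not report them.

```python
# Hedged sketch of the reported training schedule: lr starts at 1e-3,
# decays exponentially, and is floored at 1e-4; gradients are clipped
# ("cropped") by global norm. decay_rate, decay_steps, and max_norm
# are illustrative assumptions, not values from the paper.

def learning_rate(step, initial_lr=1e-3, min_lr=1e-4,
                  decay_rate=0.96, decay_steps=1000):
    """Exponentially decayed learning rate with a lower bound."""
    lr = initial_lr * (decay_rate ** (step / decay_steps))
    return max(lr, min_lr)

def clip_gradients(grads, max_norm=5.0):
    """Scale a list of gradient values so their global L2 norm <= max_norm."""
    global_norm = sum(g * g for g in grads) ** 0.5
    if global_norm > max_norm:
        scale = max_norm / global_norm
        grads = [g * scale for g in grads]
    return grads
```

In TensorFlow (which the paper uses), the equivalent pieces would be an exponential-decay schedule combined with a floor and clipping by global norm before the optimizer update.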