High-Resolution Deep Image Matting
Authors: Haichao Yu, Ning Xu, Zilong Huang, Yuqian Zhou, Humphrey Shi
AAAI 2021, pp. 3217-3224 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate the effectiveness of the proposed method and its necessity for high-resolution inputs. Our HDMatt approach also sets new state-of-the-art performance on Adobe Image Matting and Alpha Matting benchmarks and produces impressive visual results on more real-world high-resolution images. |
| Researcher Affiliation | Collaboration | Haichao Yu¹, Ning Xu², Zilong Huang¹, Yuqian Zhou¹, Humphrey Shi¹,³ (¹UIUC, ²Adobe Research, ³University of Oregon) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it explicitly state that code is released. |
| Open Datasets | Yes | We trained our models on Adobe Image Matting (AIM) dataset (Xu et al. 2017). ... The synthetic training images will be the compositions of a foreground image in the augmented AIM training set with a randomly sampled background image from the COCO dataset (Lin et al. 2014). (A compositing sketch follows the table.) |
| Dataset Splits | No | The paper mentions training on the AIM dataset and testing on AIM and Alpha Matting benchmarks, but does not explicitly describe a validation split or specific train/validation/test partitioning methodology for reproduction. |
| Hardware Specification | No | The paper mentions 'GPU memory' as a hardware limitation, but does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers needed to replicate the experiment. |
| Experiment Setup | Yes | Adam (Kingma and Ba 2014) optimizer was used with initial learning rate 0.5 × 10^-3 and decayed by cosine scheduler. The model is trained for 200k steps with batch size 32 and weight decay 10^-4. |
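
The Open Datasets row describes synthesizing training images by pairing an AIM foreground with a randomly sampled COCO background, which follows the standard matting composition I = alpha * F + (1 - alpha) * B. Below is a minimal sketch of that composition step; the function name, array shapes, and toy data are assumptions for illustration, not details from the paper.

```python
import numpy as np

def composite(fg: np.ndarray, alpha: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Standard matting composition: I = alpha * F + (1 - alpha) * B.

    fg, bg: float images in [0, 1], shape (H, W, 3); bg is assumed to be
            resized/cropped to the foreground size beforehand.
    alpha:  float matte in [0, 1], shape (H, W, 1).
    """
    return alpha * fg + (1.0 - alpha) * bg

# Toy usage with stand-in arrays (hypothetical sizes).
fg = np.ones((4, 4, 3))             # stand-in AIM foreground
bg = np.random.rand(4, 4, 3)        # stand-in COCO background crop
alpha = np.full((4, 4, 1), 0.5)     # stand-in alpha matte
image = composite(fg, alpha, bg)
```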
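
The Experiment Setup row quotes Adam with an initial learning rate of 0.5 × 10^-3, cosine decay, 200k training steps, batch size 32, and weight decay 10^-4. The sketch below shows one way to wire up that optimizer and schedule in PyTorch; the placeholder model and the choice of torch.optim.lr_scheduler.CosineAnnealingLR are assumptions, since the paper does not name a framework or scheduler implementation.

```python
import torch

# Hyperparameters quoted in the paper: Adam, initial LR 0.5e-3, cosine decay,
# 200k steps, batch size 32, weight decay 1e-4.
model = torch.nn.Conv2d(4, 1, kernel_size=3, padding=1)  # stand-in for the HDMatt network

optimizer = torch.optim.Adam(model.parameters(), lr=0.5e-3, weight_decay=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200_000)

# Training loop outline (loss computation omitted; batch size 32 crops assumed):
# for step in range(200_000):
#     loss = ...  # alpha-prediction loss on a batch of 32 crops
#     optimizer.zero_grad()
#     loss.backward()
#     optimizer.step()
#     scheduler.step()
```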