Infusing Definiteness into Randomness: Rethinking Composition Styles for Deep Image Matting

Authors: Zixuan Ye, Yutong Dai, Chaoyi Hong, Zhiguo Cao, Hao Lu

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments under controlled conditions on four deep matting baselines, including IndexNet Matting (Lu et al. 2019), GCA Matting (Li and Lu 2020), A2U Matting (Dai, Lu, and Shen 2021), and MatteFormer (Park et al. 2022), show that our composition styles have a clear advantage over previous composition styles, e.g., a 12.7%–17.9% relative improvement in the gradient metric on IndexNet Matting. (See the gradient-error sketch after the table.)
Researcher Affiliation | Academia | (1) School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, China; (2) Australian Institute for Machine Learning, The University of Adelaide, Australia
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at: https://github.com/coconuthust/composition_styles
Open Datasets | Yes | We train models on the synthetic Adobe Image Matting (Xu et al. 2017) dataset and report performance on both the synthetic Composition-1K (Xu et al. 2017) dataset and the real-world AIM-500 (Li, Zhang, and Tao 2021) and PPM-100 (Ke et al. 2022) datasets. (See the compositing sketch after the table.)
Dataset Splits | No | The paper mentions training on Adobe Image Matting and reporting performance on Composition-1K, AIM-500, and PPM-100, but does not explicitly detail training/validation/test splits (e.g., percentages or sample counts for validation).
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models.
Software Dependencies | No | The paper states 'Our implementation is based on PyTorch' but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | No | The paper mentions applying 'the same data augmentation strategies used in GCA Matting' and providing 'the same amount of training samples' but does not report specific hyperparameters such as learning rate, batch size, or number of epochs.
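
For context on the composition-based datasets in the Open Datasets row: Adobe Image Matting and Composition-1K are synthesized by blending extracted foregrounds onto sampled backgrounds with the standard matting equation I = αF + (1 − α)B; the paper's subject is how those foreground/background pairings are chosen, not the blending rule itself. Below is a minimal NumPy sketch of that conventional rule only; the function name, array shapes, and value ranges are illustrative assumptions, not the authors' code.

```python
import numpy as np

def composite(fg: np.ndarray, bg: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """Standard matting composition: I = alpha * F + (1 - alpha) * B.

    Assumed shapes: fg and bg are (H, W, 3) floats in [0, 1];
    alpha is (H, W, 1) in [0, 1] and broadcasts over the color channels.
    """
    fg = fg.astype(np.float32)
    bg = bg.astype(np.float32)
    alpha = alpha.astype(np.float32)
    return alpha * fg + (1.0 - alpha) * bg
```

In the conventional "random" composition style that the paper rethinks, bg is drawn at random for each foreground; the proposed styles instead infuse definiteness into that choice.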
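The "gradient metric" cited in the Research Type row is the standard matting gradient error, commonly attributed to the matting benchmark of Rhemann et al. (2009): the summed squared difference between gradient magnitudes of the predicted and ground-truth alpha mattes, with gradients taken by first-order Gaussian-derivative filtering. The sketch below assumes the conventional sigma of 1.4; the function name and lack of normalization are illustrative assumptions, not the paper's exact evaluation code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_error(pred: np.ndarray, gt: np.ndarray, sigma: float = 1.4) -> float:
    """Sum of squared differences between Gaussian-derivative gradient
    magnitudes of predicted and ground-truth alpha mattes, both assumed
    to be (H, W) arrays with values in [0, 1]."""
    def grad_mag(a: np.ndarray) -> np.ndarray:
        a = a.astype(np.float64)
        gy = gaussian_filter(a, sigma, order=[1, 0])  # derivative along y
        gx = gaussian_filter(a, sigma, order=[0, 1])  # derivative along x
        return np.sqrt(gx ** 2 + gy ** 2)
    return float(np.sum((grad_mag(pred) - grad_mag(gt)) ** 2))
```

On this reading, a 12.7% relative improvement means the new composition style's gradient error is 12.7% lower than the baseline's, i.e., (err_old − err_new) / err_old = 0.127.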