Revisiting Context Aggregation for Image Matting

Authors: Qinglin Liu, Xiaoqian Lv, Quanling Meng, Zonglin Li, Xiangyuan Lan, Shuo Yang, Shengping Zhang, Liqiang Nie

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on five popular matting datasets demonstrate that the proposed AEMatter outperforms state-of-the-art matting methods by a large margin. The source code is available at https://github.com/aipixel/AEMatter." (Abstract) "In this section, we perform experimental analyses on existing matting networks and basic encoder-decoder matting networks to explore the context aggregation mechanisms of matting networks and identify the key factors contributing to the performance of matting networks." (Section 3)
Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Harbin Institute of Technology, Weihai, China; (2) Peng Cheng Laboratory, Shenzhen, China; (3) Department of Computer Science, The University of Hong Kong, Hong Kong, China; (4) School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen, China.
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks; it describes the architecture and training strategy in text and diagrams.
Open Source Code | Yes | "The source code is available at https://github.com/aipixel/AEMatter." (Abstract)
Open Datasets | Yes | "The training is conducted on the Adobe Composition-1K dataset (Xu et al., 2017)" (Appendix B. Implementation details of AEMatter)
Dataset Splits | Yes | "All compared methods are first trained on image patches with sizes of 256 × 256, 512 × 512, 768 × 768, and 1024 × 1024, and then evaluated on the validation set." (Section 3.1)
Hardware Specification | Yes | "The training is conducted on the Adobe Composition-1K dataset (Xu et al., 2017), using an NVIDIA RTX 3090 GPU with a batch size of 2 for 100 epochs." (Appendix B. Implementation details of AEMatter)
Software Dependencies | Yes | "The proposed AEMatter is implemented using the PyTorch (Paszke et al., 2019) framework." (Appendix B. Implementation details of AEMatter)
Experiment Setup | Yes | "The training is conducted... with a batch size of 2 for 100 epochs. An RAdam optimizer (Liu et al., 2020) is employed to optimize the network weights with weight decay of 10⁻⁶ and betas of (0.5, 0.999). The initial learning rate is set to 2.5 × 10⁻⁵ and decays to zero using a cosine annealing scheduler." (Appendix B. Implementation details of AEMatter)
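The optimizer and scheduler settings quoted above map directly onto standard PyTorch components. The sketch below is a minimal illustration of that configuration only, assuming a placeholder one-layer model, dummy tensors in place of the Adobe Composition-1K loader, and an L1 loss; the actual AEMatter network and training pipeline are available in the linked repository.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torch.optim import RAdam
from torch.optim.lr_scheduler import CosineAnnealingLR

# Hypothetical stand-ins for the real AEMatter network and the Adobe
# Composition-1K training set (see https://github.com/aipixel/AEMatter).
model = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)   # placeholder network
images = torch.randn(4, 3, 64, 64)                        # dummy RGB patches
alphas = torch.rand(4, 1, 64, 64)                         # dummy alpha mattes
loader = DataLoader(TensorDataset(images, alphas), batch_size=2)  # "batch size of 2"

EPOCHS = 100  # "a batch size of 2 for 100 epochs" (Appendix B)

# Quoted optimizer settings: RAdam with weight decay 1e-6, betas (0.5, 0.999),
# and an initial learning rate of 2.5e-5.
optimizer = RAdam(model.parameters(), lr=2.5e-5,
                  betas=(0.5, 0.999), weight_decay=1e-6)

# "decays to zero using a cosine annealing scheduler" -> eta_min=0 over all epochs.
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS, eta_min=0.0)

for epoch in range(EPOCHS):
    for x, y in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.l1_loss(model(x), y)  # placeholder loss
        loss.backward()
        optimizer.step()
    scheduler.step()
```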