Privileged Prior Information Distillation for Image Matting

Authors: Cheng Lyu, Jiake Xie, Bo Xu, Cheng Lu, Han Huang, Xin Huang, Ming Wu, Chuang Zhang, Yong Tang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the effectiveness and superiority of our PPID on image matting. The code will be released soon. [...] We conduct a comparative study on Real-1K and two composition benchmarks: Adobe Image Matting (Xu et al. 2017) and Distinction-646 (Qiao et al. 2020). We report mean square error (MSE), sum of the absolute difference (SAD), spatial-gradient (Grad), and connectivity (Conn) between predicted and ground truth alpha mattes.
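The row above lists the four standard matting metrics. As a quick illustration of the two simplest ones, the sketch below computes SAD and MSE between a predicted and a ground-truth alpha matte; the function name and the convention of reporting SAD divided by 1000 are common in the matting literature but are assumptions here, not taken from the paper (Grad and Conn need additional gradient and connectivity machinery and are omitted):

```python
import numpy as np

def matting_errors(pred, gt):
    """SAD and MSE between predicted and ground-truth alpha mattes.

    pred, gt: float arrays with values in [0, 1], same shape.
    SAD is conventionally reported divided by 1000; MSE is the
    mean squared difference over all pixels.
    """
    diff = pred.astype(np.float64) - gt.astype(np.float64)
    sad = np.abs(diff).sum() / 1000.0
    mse = np.square(diff).mean()
    return sad, mse
```

In benchmark practice these errors are often restricted to the unknown region of the trimap rather than the whole image; the sketch uses all pixels for simplicity.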
Researcher Affiliation | Collaboration | Cheng Lyu1*, Jiake Xie2*, Bo Xu3*, Cheng Lu3, Han Huang4, Xin Huang5, Ming Wu1, Chuang Zhang1, Yong Tang2 1Beijing University of Posts and Telecommunications 2Pic Up.Ai 3Xpeng 4AI2 Robotics 5Towson University
Pseudocode | No | The paper describes the proposed modules (Cross-layer Semantic Distillation and Attention-Guided Local Distillation) using mathematical formulations and descriptive text, but it does not include any pseudocode blocks or explicitly labeled algorithm sections.
Open Source Code | No | The code will be released soon.
Open Datasets | Yes | We propose the first large-scale UHD (Ultra High Definition) natural image matting test set Real-World Image Matting-1K (Real-1K), which contains 1000 ultra high-resolution (from 4K to 8K) real-world natural samples of transparent and non-transparent attributes. Table 1 shows comparisons between some existing image matting datasets (DAPM (Shen et al. 2016), Adobe (Xu et al. 2017), Dist-646 (Qiao et al. 2020), AM-2k (Li et al. 2022b), AIM (Li, Zhang, and Tao 2021), and RWP-636 (Yu et al. 2021)) and ours. [...] Adobe Matting Dataset (Xu et al. 2017). The training set consists of 431 foreground objects, each composited over 100 random COCO (Lin et al. 2014) images to produce 43.1k composited training images. [...] Distinction-646 (Qiao et al. 2020). It includes 596 and 50 foreground objects in the training and test sets, respectively. [...] For testing on Real-1K, we train all models on the combined training set of Adobe (Xu et al. 2017) and Distinction-646 (Qiao et al. 2020).
Dataset Splits | Yes | Adobe Matting Dataset (Xu et al. 2017). The training set consists of 431 foreground objects, each composited over 100 random COCO (Lin et al. 2014) images to produce 43.1k composited training images. For the test set, we first composite each foreground from the test set with 20 random VOC (Everingham et al. 2010) images to produce 1k composited testing images (Composition-1K). Then we split Composition-1K into two groups (240 and 760 images, respectively) based on the critical attributes of transparent and non-transparent.
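The composited training and test images described in these rows follow the standard matting equation I = alpha * F + (1 - alpha) * B, pasting each foreground over a random background. A minimal sketch of that compositing step (array names and shapes are illustrative assumptions, not code from the paper):

```python
import numpy as np

def composite(fg, bg, alpha):
    """Composite a foreground over a background via the matting equation.

    fg, bg: HxWx3 float arrays with values in [0, 1].
    alpha:  HxW float array with values in [0, 1] (the alpha matte).
    Returns the composited HxWx3 image I = alpha*F + (1 - alpha)*B.
    """
    a = alpha[..., None]  # broadcast alpha across the color channels
    return a * fg + (1.0 - a) * bg
```

Repeating this with 100 random COCO backgrounds per foreground yields the 431 x 100 = 43.1k training images; the same equation with 20 VOC backgrounds per test foreground yields Composition-1K.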
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., CPU/GPU models, memory specifications).
Software Dependencies | No | The paper states: 'All the experiments are conducted with Pytorch (Paszke et al. 2019).' While it mentions PyTorch, it does not provide version numbers for PyTorch itself or for any other relevant software libraries or dependencies.
Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or other training configurations.