dugMatting: Decomposed-Uncertainty-Guided Matting
Authors: Jiawei Wu, Changqing Zhang, Zuoyong Li, Huazhu Fu, Xi Peng, Joey Tianyi Zhou
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive quantitative and qualitative results validate that the proposed method significantly improves the original matting algorithms in terms of both efficiency and efficacy. Experiments are conducted on the standard natural matting dataset Composition-1k (Xu et al., 2017) and the real-world portrait dataset P3M-10K (Li et al., 2021). |
| Researcher Affiliation | Academia | 1College of Mechanical and Electrical Engineering, Fujian Agriculture and Forestry University, Fuzhou, China 2College of Intelligence and Computing, Tianjin University, Tianjin, China 3Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University, Fuzhou, China 4Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore 5College of Computer Science, Sichuan University, Chengdu, China 6Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR), Singapore. |
| Pseudocode | Yes | Algorithm 1 Uncertainty-Guided Interaction. Input: epistemic uncertainty u_epis, predicted matte γ, input image x, threshold t, patch number K, and selection number N. Initialization: initialize the user map U. Divide u_epis into K×K patches. Compute the patch-level uncertainty u_p ∈ ℝ₊^{K×K}. P ← top-N uncertainty patches from {u_p \| u_p > t}. For each p in P: calculate the index I of p in the input image x; the user selects a label (foreground, background, or transition) for x[I]; update the user map U. Output: user map U. |
| Open Source Code | Yes | Our implementation is based on the open source framework PyTorch. All the experiments were run on two GeForce RTX 3090 GPUs. Code is available at https://github.com/Fire-friend/dugMatting. |
| Open Datasets | Yes | Experiments are conducted on the standard natural matting dataset Composition-1k (Xu et al., 2017) and the real-world portrait dataset P3M-10K (Li et al., 2021). |
| Dataset Splits | No | The paper reports 43,100 training images and 1,000 testing images for Composition-1k, and 9,421 training images and 500 testing images for P3M-10K, but does not specify a separate validation split or its size. |
| Hardware Specification | Yes | All the experiments were run on two GeForce RTX 3090 GPUs. |
| Software Dependencies | No | The paper states "Our implementation is based on the open source framework PyTorch" but does not specify a version number for PyTorch or any other software dependency. |
| Experiment Setup | Yes | All models are optimized using the Adam optimizer (Kingma & Ba, 2014), with a base learning rate of 1×10⁻³ under a cosine learning rate scheduler (He et al., 2019), 100 training epochs, and a batch size of 16. |
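The patch-selection step of Algorithm 1 above can be sketched in NumPy. This is a hypothetical reconstruction, not the authors' code: the function name `select_uncertain_patches` and the return format (pixel-region tuples for the user to label) are assumptions; the paper's pseudocode only specifies dividing u_epis into K×K patches, computing patch-level uncertainty, and taking the top-N patches above threshold t.

```python
import numpy as np

def select_uncertain_patches(u_epis, K=16, t=0.1, N=5):
    """Divide an epistemic-uncertainty map into K x K patches, compute
    patch-level mean uncertainty, and return pixel regions for the top-N
    patches whose uncertainty exceeds threshold t (Algorithm 1, sketch)."""
    H, W = u_epis.shape
    ph, pw = H // K, W // K
    # Patch-level uncertainty u_p in R_+^{K x K}: mean over each patch.
    u_p = u_epis[:ph * K, :pw * K].reshape(K, ph, K, pw).mean(axis=(1, 3))
    # Keep only patches above the threshold t, then take the top N.
    flat = u_p.ravel()
    candidates = np.flatnonzero(flat > t)
    top = candidates[np.argsort(flat[candidates])[::-1][:N]]
    # Map each selected patch back to its pixel region (r0, r1, c0, c1)
    # in the input image, where the user would assign a label.
    regions = []
    for idx in top:
        r, c = divmod(int(idx), K)
        regions.append((r * ph, (r + 1) * ph, c * pw, (c + 1) * pw))
    return regions

# Example: a 64x64 uncertainty map with one highly uncertain region.
u = np.zeros((64, 64))
u[8:16, 8:16] = 1.0  # a single high-uncertainty patch
print(select_uncertain_patches(u, K=8, t=0.1, N=3))  # [(8, 16, 8, 16)]
```

In the full interaction loop, each returned region would be shown to the user, who labels it as foreground, background, or transition; those labels update the user map U that conditions the matting network.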