ShadowFormer: Global Context Helps Shadow Removal
Authors: Lanqing Guo, Siyu Huang, Ding Liu, Hao Cheng, Bihan Wen
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on three popular public datasets, including ISTD, ISTD+, and SRD, to evaluate the proposed method. Our method achieves state-of-the-art performance while using up to 150× fewer model parameters. Experimental results show that the proposed ShadowFormer models consistently generate superior results over the three widely-used shadow removal datasets, significantly outperforming the state-of-the-art methods while using 5× to 150× fewer model parameters. |
| Researcher Affiliation | Collaboration | Lanqing Guo¹, Siyu Huang², Ding Liu³, Hao Cheng¹, Bihan Wen¹*. ¹Nanyang Technological University, Singapore; ²Harvard University, USA; ³ByteDance Inc., USA |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | https://github.com/GuoLanqing/ShadowFormer |
| Open Datasets | Yes | We work with three benchmark datasets for the shadow removal experiments: (1) the ISTD dataset (Wang, Li, and Yang 2018) includes 1330 training and 540 testing triplets (shadow images, masks, and shadow-free images); (2) the Adjusted ISTD (ISTD+) dataset (Le and Samaras 2019) reduces the illumination inconsistency between the shadow and shadow-free images of ISTD via an image-processing algorithm and has the same number of triplets as ISTD; (3) the SRD dataset (Qu et al. 2017) consists of 2680 training and 408 testing pairs of shadow and shadow-free images without ground-truth shadow masks. A hedged triplet-loading sketch follows the table. |
| Dataset Splits | No | The paper reports specific training and testing set sizes, but it does not specify any validation split (counts or percentages) needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models (e.g., NVIDIA A100, RTX 2080 Ti) or CPU models (e.g., Intel Core i7, Xeon). |
| Software Dependencies | No | The proposed ShadowFormer is implemented using PyTorch. Following (Vaswani et al. 2017), we train our model using the AdamW optimizer (Loshchilov and Hutter 2017). The paper mentions PyTorch and the AdamW optimizer but does not specify their version numbers or any other key software components with versions. |
| Experiment Setup | Yes | The proposed ShadowFormer is implemented using PyTorch. Following (Vaswani et al. 2017), we train our model using the AdamW optimizer (Loshchilov and Hutter 2017) with momentum (0.9, 0.999) and weight decay 0.02. The initial learning rate is 2e-4 and gradually reduces to 1e-6 with cosine annealing (Loshchilov and Hutter 2016). We set σ = 0.2 in our experiments. We set the first feature-embedding dimension to C = 32 and C = 24 for Ours-Large and Ours-Small, respectively. A hedged optimizer/schedule sketch follows the table. |
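For reference, below is a minimal sketch of how the ISTD-style triplets described in the Open Datasets row could be loaded in PyTorch. This is not the authors' released code: the folder names (`train_A` for shadow images, `train_B` for masks, `train_C` for shadow-free images) and the `ISTDTriplets` class are assumptions based on the common ISTD release layout.

```python
# Hedged sketch of an ISTD-style triplet loader; NOT the authors' code.
# Folder names (train_A / train_B / train_C) are assumed, not taken from the paper.
import os

from PIL import Image
from torch.utils.data import Dataset
import torchvision.transforms.functional as TF


class ISTDTriplets(Dataset):
    """Yields (shadow image, shadow mask, shadow-free image) triplets."""

    def __init__(self, root, split="train"):
        self.shadow_dir = os.path.join(root, f"{split}_A")  # shadow images
        self.mask_dir = os.path.join(root, f"{split}_B")    # shadow masks
        self.free_dir = os.path.join(root, f"{split}_C")    # shadow-free images
        self.names = sorted(os.listdir(self.shadow_dir))

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        shadow = TF.to_tensor(Image.open(os.path.join(self.shadow_dir, name)).convert("RGB"))
        mask = TF.to_tensor(Image.open(os.path.join(self.mask_dir, name)).convert("L"))
        free = TF.to_tensor(Image.open(os.path.join(self.free_dir, name)).convert("RGB"))
        return shadow, mask, free
```

A `DataLoader` wrapped around this dataset would yield (shadow, mask, shadow-free) batches matching the triplet structure the paper describes; SRD lacks ground-truth masks, so its loader would omit the mask branch.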
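Likewise, the optimizer and schedule settings quoted in the Experiment Setup row translate into roughly the following PyTorch configuration. This is a sketch under stated assumptions, not the released training script: `model`, `num_epochs`, and the cosine-schedule horizon are placeholders, since the excerpt does not specify the number of training epochs.

```python
# Hedged sketch of the reported optimizer/schedule settings; NOT the released training script.
import torch
from torch import nn

model = nn.Conv2d(3, 3, 3, padding=1)  # stand-in placeholder for ShadowFormer
num_epochs = 500                        # placeholder horizon; not specified in the excerpt

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-4,             # initial learning rate reported in the paper
    betas=(0.9, 0.999),  # "momentum" values reported in the paper
    weight_decay=0.02,   # weight decay reported in the paper
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer,
    T_max=num_epochs,
    eta_min=1e-6,        # anneal toward the reported 1e-6 floor
)

for epoch in range(num_epochs):
    # ... one training pass over the shadow-removal triplets ...
    scheduler.step()  # cosine annealing of the learning rate per epoch
```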