Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Polarization Guided Mask-Free Shadow Removal

Authors: Chu Zhou, Chao Xu, Boxin Shi

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results show that our Pol-ShaRe achieves state-of-the-art performance on both synthetic and real-world images. ... We compare our results to five latest learning-based shadow removal methods ... Visual quality comparisons on synthetic data are shown in Fig. 4. ... We also adopt PSNR, SSIM, and RMSE (the root mean square error in the LAB color space) to evaluate the results on synthetic data quantitatively ... Results are shown in Tab. 1. Our method consistently outperforms the compared ones on all metrics. ... We conduct a series of ablation studies to verify the validity of each design choice. As shown in Tab. 2, our complete model achieves the best performance.
Researcher Affiliation Academia Chu Zhou (1), Chao Xu (2), Boxin Shi (3,4); 1: National Institute of Informatics, Japan; 2: National Key Laboratory of General Artificial Intelligence, School of IST, Peking University, China; 3: State Key Laboratory for Multimedia Information Processing, School of CS, Peking University, China; 4: National Engineering Research Center of Visual Technology, School of CS, Peking University, China
Pseudocode No The paper describes the methodology using textual explanations and network architecture diagrams (Figure 3), but does not include any explicit pseudocode or algorithm blocks.
Open Source Code No The paper does not contain an explicit statement about releasing source code, nor does it provide a link to a code repository.
Open Datasets No All compared methods are retrained on our synthetic dataset for a fair comparison. Information about our synthetic dataset can be found in the supplementary material. ... To demonstrate the generalization ability, we capture several real images containing shadows. The paper mentions a 'synthetic dataset' and 'real images' but only states that 'Information about our synthetic dataset can be found in the supplementary material', which is not concrete access information (e.g., a link, DOI, or specific file names).
Dataset Splits No The paper mentions evaluating on a 'synthetic dataset' and comparing methods on it, but does not specify any training, validation, or test dataset splits (e.g., percentages or sample counts).
Hardware Specification No The paper does not provide any specific hardware details such as GPU or CPU models, or other computer specifications used for running the experiments.
Software Dependencies No The paper does not specify any particular software, libraries, or programming language versions used for the implementation or experiments.
Experiment Setup Yes The total loss function of our network L consists of two terms: modulation loss Lmod and image loss Limg, which is defined as L(m, m_gt, I, I_gt) = Lmod(m, m_gt) + β Limg(I, I_gt) (Eq. 13), where β is empirically set to 0.1, the subscript gt denotes the ground truth, and both Lmod and Limg are defined as the following basic loss function ... where L1, L2, Lperc, and Lgrad denote the ℓ1, ℓ2, perceptual, and gradient loss (ℓ2 loss in the gradient domain) respectively, and β1,2,3,4 are empirically set to 10, 100, 0.1, and 10 respectively. The perceptual loss Lperc is defined as the ℓ2 loss computed using the feature maps of the VGG3,3 convolution layer of the VGG-19 network (Simonyan and Zisserman 2014) pretrained on ImageNet (Russakovsky et al. 2015).
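The quoted loss structure above (a weighted basic loss applied to both the modulation and image terms) can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' implementation: the function names, array-based signatures, and the stubbed-out `perceptual_loss` (which in the paper requires a pretrained VGG-19) are all assumptions; only the weights (β = 0.1, β1..4 = 10, 100, 0.1, 10) come from the paper.

```python
import numpy as np

def l1_loss(x, y):
    # mean absolute error
    return np.abs(x - y).mean()

def l2_loss(x, y):
    # mean squared error
    return ((x - y) ** 2).mean()

def gradient_loss(x, y):
    # l2 loss in the gradient domain, using finite differences
    # along the two spatial axes as a stand-in for image gradients
    return (l2_loss(np.diff(x, axis=0), np.diff(y, axis=0))
            + l2_loss(np.diff(x, axis=1), np.diff(y, axis=1)))

def basic_loss(pred, gt, perceptual_loss=lambda a, b: 0.0,
               b1=10.0, b2=100.0, b3=0.1, b4=10.0):
    # weighted sum of l1, l2, perceptual, and gradient terms;
    # beta_{1..4} empirically set to 10, 100, 0.1, 10 per the paper.
    # perceptual_loss defaults to a zero stub (a real VGG-19 feature
    # loss would be plugged in here).
    return (b1 * l1_loss(pred, gt) + b2 * l2_loss(pred, gt)
            + b3 * perceptual_loss(pred, gt)
            + b4 * gradient_loss(pred, gt))

def total_loss(m, m_gt, I, I_gt, beta=0.1):
    # Eq. 13: L = L_mod + beta * L_img, with beta empirically 0.1
    return basic_loss(m, m_gt) + beta * basic_loss(I, I_gt)
```

With the perceptual term stubbed out, identical predictions and ground truths give a loss of exactly zero, which is a quick sanity check on the weighting.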