SWBNet: A Stable White Balance Network for sRGB Images

Authors: Chunxiao Li, Xuejing Kang, Zhifeng Zhang, Anlong Ming

AAAI 2023

Reproducibility Variable Result LLM Response
Research Type Experimental Experiments indicate that our SWBNet achieves stable and remarkable WB performance. ... We compared the proposed SWBNet with several state-of-the-art sRGB-WB methods, including KNN-WB (Afifi et al. 2019a), DWB (Afifi and Brown 2020a), Interactive WB (Afifi and Brown 2020b), and Mixed WB (Afifi, Brubaker, and Brown 2022). The comparison results are in Table 2, Fig. 2, and Table 3, respectively.
Researcher Affiliation Academia Chunxiao Li, Xuejing Kang, Zhifeng Zhang, Anlong Ming* School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications {chunxiaol,kangxuejing,zhangzhifeng,mal}@bupt.edu.cn
Pseudocode No The paper describes its method through textual descriptions, block diagrams (Fig. 4), and mathematical equations (e.g., Eq. 1-8), but does not include structured pseudocode or algorithm blocks.
Open Source Code No The paper does not provide an explicit statement or link for open-source code for the methodology described.
Open Datasets Yes Following (Afifi and Brown 2020a), we randomly selected 12000 sRGB images from the Rendered WB dataset (Afifi et al. 2019a) for training and used the Rendered Cube and Mixedscene datasets for evaluation.
Dataset Splits No The paper mentions selecting 12000 sRGB images from the Rendered WB dataset for training and using the Rendered Cube and Mixedscene datasets for evaluation, but it does not provide training/validation/test split percentages, sample counts, or any explicit description of a validation set for reproducibility.
Hardware Specification No The paper states 'We implement the SWBNet on Pytorch with CUDA support' but does not specify any particular hardware model (e.g., GPU, CPU models, memory details) used for running the experiments.
Software Dependencies No The paper states 'We implement the SWBNet on Pytorch with CUDA support' and mentions using 'Adam' and the 'AdamW optimizer', but does not provide specific version numbers for PyTorch, CUDA, or other key software dependencies.
Experiment Setup Yes To train the CTIF extractor and decoder, we use Adam (Kingma and Ba 2015) with β1 = 0.9. We set the learning rate to 1×10⁻⁴ and then decay it to 1×10⁻⁵ after 150 epochs. To train the whole model, we use the AdamW optimizer (Loshchilov and Hutter 2019) with weight decay 10⁻². We set the batch size to 64 and train all modules for 200 epochs in two phases. Following (Afifi and Brown 2020a), we randomly select four 128×128 patches from each image and their corresponding GTs for training, and apply geometric rotation and flipping as data augmentations to avoid overfitting.
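The reported training schedule can be sketched as a simple step-decay learning-rate function plus a hyperparameter record. This is an illustrative reconstruction from the paper's stated numbers, not the authors' code; the function and config names are assumptions.

```python
def swbnet_lr(epoch, base_lr=1e-4, decayed_lr=1e-5, decay_epoch=150):
    """Step-decay schedule reported for the CTIF extractor/decoder:
    1e-4 for the first 150 epochs, then 1e-5 for the remainder
    of the 200-epoch run."""
    return base_lr if epoch < decay_epoch else decayed_lr

# Hyperparameters as reported in the paper (two training phases).
train_config = {
    "phase1_optimizer": {"name": "Adam", "beta1": 0.9},
    "phase2_optimizer": {"name": "AdamW", "weight_decay": 1e-2},
    "batch_size": 64,
    "epochs": 200,
    "patch_size": 128,        # 128x128 random crops
    "patches_per_image": 4,   # four patches sampled per image
    "augmentations": ["rotation", "flipping"],
}
```

In a PyTorch implementation, this step decay would typically be realized with `torch.optim.lr_scheduler.StepLR` or a `LambdaLR` wrapping a function like the one above.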