Deep Spatial Adaptive Network for Real Image Demosaicing

Authors: Tao Zhang, Ying Fu, Cheng Li

AAAI 2022, pp. 3326-3334

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that our SANet outperforms the state-of-the-art methods under both comprehensive quantitative metrics and perceptive quality in both noiseless and noisy cases.
Researcher Affiliation | Collaboration | Tao Zhang (1), Ying Fu (1), Cheng Li (2); (1) Beijing Institute of Technology, (2) Huawei Noah's Ark Lab; {tzhang,fuying}@bit.edu.cn, licheng89@huawei.com
Pseudocode | No | The paper describes the network architecture and operations in text and figures, but it does not include a formal pseudocode block or algorithm section.
Open Source Code | No | The paper does not include an explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | No | The paper describes collecting a new dataset ('we collect a paired real demosaicing dataset' and 'To support the research, we employ a pixel shift camera to capture a real paired mosaic and full color RGB images dataset'), but it does not provide any concrete access information (link, DOI, repository, or formal citation with author/year) for public availability.
Dataset Splits | No | The paper mentions randomly cropping 256×256 regions for training and evaluating on the captured dataset, but it does not explicitly specify training, validation, and test splits with percentages, sample counts, or citations to predefined splits.
Hardware Specification | No | The paper mentions using a 'Sony A7R4 digital camera' for data capture, but it does not provide specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the deep learning experiments.
Software Dependencies | No | The paper states 'Our implementation is based on PyTorch (Paszke et al. 2019)', but it does not provide a specific version number for PyTorch or any other ancillary software dependencies.
Experiment Setup | Yes | The kernel size K is set to be 5, and the decomposed kernel sizes K1 and K2 are set to be 3 and 3 for all spatial adaptive convolutions, respectively. In the training stage, we randomly crop overlapped 256×256 spatial regions from images in our paired real demosaicing dataset. Our implementation is based on PyTorch (Paszke et al. 2019). The models are trained with the Adam optimizer (Kingma and Ba 2014) (β1 = 0.9, β2 = 0.999) for 100 epochs. The initial learning rate and mini-batch size are set to 1×10^-4 and 1, respectively.
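
For concreteness, below is a minimal PyTorch sketch of the reported setup. Only the kernel sizes (K = 5 decomposed into K1 = K2 = 3), the Adam hyperparameters (β1 = 0.9, β2 = 0.999), the 1×10^-4 initial learning rate, the mini-batch size of 1, the 100 epochs, and the random 256×256 crops come from the excerpt above; `AdaptiveConv2d` (a common kernel-prediction realization of a spatially adaptive convolution), `random_crop_pair`, the random stand-in data, the 3-channel mosaic representation, and the L1 loss are all illustrative assumptions, not the paper's actual SANet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveConv2d(nn.Module):
    """Illustrative spatially adaptive convolution: a small branch predicts
    one k*k kernel per pixel (shared across channels), which is applied to
    unfolded input patches. This is an assumption about what 'spatial
    adaptive convolution' means here, not the paper's actual module."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        self.kernel_net = nn.Conv2d(channels, kernel_size ** 2, 3, padding=1)

    def forward(self, x):
        b, c, h, w = x.shape
        k = self.k
        # Per-pixel kernels, softmax-normalized over the k*k taps.
        kernels = F.softmax(self.kernel_net(x), dim=1)      # (b, k*k, h, w)
        patches = F.unfold(x, k, padding=k // 2)            # (b, c*k*k, h*w)
        patches = patches.view(b, c, k * k, h, w)
        return (patches * kernels.unsqueeze(1)).sum(dim=2)  # weighted tap sum


def random_crop_pair(mosaic, rgb, size=256):
    """Randomly crop aligned size x size regions from a mosaic/RGB pair."""
    _, h, w = rgb.shape
    top = int(torch.randint(0, h - size + 1, (1,)))
    left = int(torch.randint(0, w - size + 1, (1,)))
    window = (..., slice(top, top + size), slice(left, left + size))
    return mosaic[window], rgb[window]


# Two 3x3 adaptive convolutions (K1 = K2 = 3) standing in for the 5x5
# kernel (K = 5); the real SANet is far richer than this placeholder.
model = nn.Sequential(AdaptiveConv2d(3, 3), AdaptiveConv2d(3, 3))

# Stated hyperparameters: Adam (beta1 = 0.9, beta2 = 0.999), initial
# learning rate 1e-4, mini-batch size 1, 100 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
criterion = nn.L1Loss()  # assumed; the excerpt does not name the loss

# Stand-in data: random tensors in place of the captured pixel-shift pairs.
full_mosaic, full_rgb = torch.rand(3, 512, 512), torch.rand(3, 512, 512)

for epoch in range(100):
    mosaic, target = (t.unsqueeze(0) for t in
                      random_crop_pair(full_mosaic, full_rgb))  # batch size 1
    optimizer.zero_grad()
    loss = criterion(model(mosaic), target)
    loss.backward()
    optimizer.step()
```

One plausible reading of the decomposition: two sequential 3×3 adaptive convolutions keep the 5×5 effective receptive field while cutting the per-pixel weights that must be predicted from 25 to 18, which is presumably the motivation for the reported K1 = K2 = 3 choice.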