End-to-End Unpaired Image Denoising with Conditional Adversarial Networks
Authors: Zhiwei Hong, Xiaochen Fan, Tao Jiang, Jianxing Feng (pp. 4140-4149)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental evaluation has been performed on both synthetic and real data, including real photographs and computed tomography (CT) images. The results demonstrate that our model outperforms the previous models trained on unpaired images as well as the state-of-the-art methods based on paired training data when proper training pairs are unavailable. |
| Researcher Affiliation | Collaboration | Zhiwei Hong,1 Xiaochen Fan,2 Tao Jiang,3,1* Jianxing Feng2 (1Tsinghua University; 2Haohua Technology Co., Ltd.; 3University of California, Riverside) |
| Pseudocode | No | The paper describes the architectures of the generative, discriminative, and denoising networks using figures (Fig. 3, 4, 5) and textual descriptions, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | To train and test the model, we crop images into 64×64 patches in all experiments. (1) For denoising synthetic Gaussian noise, we use the 400 images of size 180×180 in (Chen and Pock 2017) for training. ... we also train our model on color images from the BSD500 dataset (Arbelaez et al. 2011). ... (2) For image denoising on real photographs, we choose the benchmark Smartphone Image Denoising Dataset (SIDD) (Abdelhamed, Lin, and Brown 2018)... (3) For low-dose CT image denoising, a real clinical dataset from the 2016 NIH-AAPM-Mayo Clinic Low Dose CT Grand Challenge authorized by Mayo Clinic (AAPM 2016) is utilized to train and evaluate our model. |
| Dataset Splits | No | The paper describes the training and test data for its experiments but does not explicitly mention or detail a separate validation set or its specific split percentage/methodology for reproducibility. |
| Hardware Specification | Yes | It takes seven to nine hours to train our model on a single Nvidia GeForce GTX 1080 Ti GPU. |
| Software Dependencies | No | The paper mentions using the Adam optimization algorithm and ReLU activation but does not specify version numbers for any software frameworks (e.g., TensorFlow, PyTorch) or other critical libraries used in the implementation. |
| Experiment Setup | Yes | In the noise learning model, the λ in Eqn. 2 is set to 10 and the local mean kernel size is set to 3×3. ...We use the Adam (Kingma and Ba 2014) optimization algorithm with β1 = 0.5 and initial learning rate 1.0×10⁻⁴ to train UIDNet. Depending on the training dataset size, we train the model for 100, 20 and 50 epochs on the synthetic data, real photographs and low-dose CT images, respectively. |
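The reported setup (λ = 10, a 3×3 local mean kernel, Adam with β1 = 0.5 and learning rate 1.0×10⁻⁴) can be sketched as a training configuration. This is a minimal, hypothetical PyTorch sketch, assuming the paper used a standard deep learning framework; the `generator` placeholder stands in for the actual UIDNet generator (Figs. 3-5), whose architecture is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hyperparameters as quoted from the paper.
LAMBDA = 10.0       # weight of the lambda term in Eqn. 2
KERNEL_SIZE = 3     # 3x3 local mean kernel
LR = 1.0e-4         # initial learning rate for Adam
BETA1 = 0.5         # Adam beta1

def local_mean(x: torch.Tensor, k: int = KERNEL_SIZE) -> torch.Tensor:
    """Per-channel local mean via average pooling with 'same' padding.

    count_include_pad=False excludes zero padding from the average,
    so border pixels are a true mean of their in-image neighbors.
    """
    return F.avg_pool2d(x, kernel_size=k, stride=1, padding=k // 2,
                        count_include_pad=False)

# Placeholder network standing in for the paper's generator.
generator = nn.Sequential(nn.Conv2d(3, 3, kernel_size=3, padding=1))

# Adam optimizer with the reported beta1 and learning rate
# (beta2 left at the PyTorch default of 0.999, which the paper
# does not specify).
optimizer = torch.optim.Adam(generator.parameters(),
                             lr=LR, betas=(BETA1, 0.999))
```

Note that on a constant image the local mean should return the image unchanged, which is a quick sanity check for the padding behavior.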