Conditional Diffusion Process for Inverse Halftoning
Authors: Hao Jiang, Yadong Mu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Quantitative and qualitative experimental results demonstrate that the proposed method achieves state-of-the-art results. |
| Researcher Affiliation | Academia | Hao Jiang (Peking University, jianghao@stu.pku.edu.cn); Yadong Mu (Peking University; Peng Cheng Laboratory, myd@pku.edu.cn) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any concrete access to source code, such as a specific repository link, an explicit code release statement, or mention of code in supplementary materials. |
| Open Datasets | Yes | We construct the training set and validation set based on the UTKFace dataset (Zhang et al., 2017) and the test set based on the VOC2012 dataset (Everingham et al., 2010). |
| Dataset Splits | Yes | There are 7,857 images in the training set, 400 images in the validation set, and 400 images in the test set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions the 'AdamW (Loshchilov and Hutter, 2018) optimizer' and a 'pre-trained VGG network', but it does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | The image size for halftone dithering and inverse halftoning is 256×256. The channel number of input halftones is 1. For the halftone dithering diffusion model, the learning rate is set to 0.0001, and k is set to 20. We use 200 diffusion steps in the training and testing phases. The number of model channels is 64 and the linear noise schedule is adopted throughout diffusion. We adopt the AdamW (Loshchilov and Hutter, 2018) optimizer to train the halftone dithering diffusion model. For the inverse halftoning diffusion model, we set the learning rate to 0.0001 and use 800 steps in the diffusion process. We set the number of model attention heads to 4. AdamW (Loshchilov and Hutter, 2018) is also used for optimization. |
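The hyperparameters reported in the experiment setup above can be collected into a configuration sketch. This is a minimal illustration for reproduction attempts: the dataclass name, field names, and the grouping into two configs are assumptions, while the values come directly from the table; k = 20 (halftone dithering) is noted in a comment since its role is not specified here.

```python
from dataclasses import dataclass


@dataclass
class DiffusionConfig:
    """Hyperparameters as reported in the paper's experiment setup.

    Field names are illustrative (assumed); only the values are from the paper.
    """
    image_size: int = 256          # 256x256 images
    input_channels: int = 1        # single-channel halftones
    learning_rate: float = 1e-4
    diffusion_steps: int = 200
    model_channels: int = 64
    noise_schedule: str = "linear"
    optimizer: str = "AdamW"       # Loshchilov and Hutter, 2018
    attention_heads: int = 4


# Halftone dithering diffusion model: 200 steps (paper also sets k = 20,
# whose exact role is not detailed in this summary).
dithering_cfg = DiffusionConfig(diffusion_steps=200)

# Inverse halftoning diffusion model: 800 diffusion steps, 4 attention heads.
inverse_cfg = DiffusionConfig(diffusion_steps=800)
```

Both models share the same optimizer, learning rate, and image resolution; only the number of diffusion steps differs in the reported setup.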