Learning to Deblur Face Images via Sketch Synthesis

Authors: Songnan Lin, Jiawei Zhang, Jinshan Pan, Yicun Liu, Yongtian Wang, Jing Chen, Jimmy Ren
Pages: 11523-11530

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We analyze the effectiveness of each component on face image deblurring and show that the proposed algorithm is able to deblur face images with favorable performance against state-of-the-art methods."
Researcher Affiliation | Collaboration | (1) Beijing Institute of Technology, Beijing, China; (2) SenseTime Research, Shenzhen, China; (3) Nanjing University of Science and Technology, Nanjing, China
Pseudocode | No | No pseudocode or algorithm blocks were found.
Open Source Code | No | The paper does not provide a link or an explicit statement about open-sourcing the code for the described methodology.
Open Datasets | Yes | "We evaluate the proposed methods on four synthetic datasets: CMU PIE (Gross et al. 2010), Helen (Le et al. 2012), CelebA (Liu et al. 2015) and PubFig (Kumar et al. 2009)... As for the ground truth of the sketches, we use two public face sketch datasets (Tang and Wang 2003) and (Wang and Tang 2008)."
Dataset Splits | No | The paper mentions collecting images for training and generating a test set, but does not specify a validation split or its size.
Hardware Specification | Yes | "We use PyTorch (Paszke et al. 2017) to train the network on a GeForce GTX 1080 GPU."
Software Dependencies | No | The paper mentions PyTorch and Adam but does not specify version numbers for these software dependencies (e.g., PyTorch 1.x).
Experiment Setup | Yes | "Each stage contains 60 epochs and the learning rate is 0.0002. In all experiments, the parameters λ, β and γ are set as 10, 0.01 and 0.01."
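The reported hyperparameters can be collected into a minimal PyTorch training sketch. This is not the authors' code: the model, the placeholder L1 reconstruction loss, and which loss term each weight attaches to are assumptions; only the values themselves (Adam-style stage training, learning rate 0.0002, 60 epochs per stage, λ = 10, β = 0.01, γ = 0.01) come from the paper.

```python
import torch
import torch.nn as nn

# Hyperparameters reported in the paper
LR = 2e-4                # learning rate 0.0002
EPOCHS_PER_STAGE = 60    # each stage contains 60 epochs
LAMBDA, BETA, GAMMA = 10.0, 0.01, 0.01  # loss weights λ, β, γ


def total_loss(content, sketch, adv, perceptual):
    """Weighted sum of loss terms.

    Which term each weight multiplies is an assumption for illustration;
    the paper only states the weight values, not this exact pairing.
    """
    return content + LAMBDA * sketch + BETA * adv + GAMMA * perceptual


def train_stage(model, batches, epochs=1):
    """Run one training stage over (blurred, sharp) pairs.

    Uses a plain L1 reconstruction loss as a stand-in for the paper's
    full objective; returns the last batch loss as a float.
    """
    opt = torch.optim.Adam(model.parameters(), lr=LR)
    l1 = nn.L1Loss()
    loss = torch.tensor(0.0)
    for _ in range(epochs):
        for blurred, sharp in batches:
            opt.zero_grad()
            loss = l1(model(blurred), sharp)
            loss.backward()
            opt.step()
    return loss.item()
```

In the real setup, `epochs` would be `EPOCHS_PER_STAGE` and the model would be the paper's sketch-synthesis and deblurring networks; a toy `nn.Conv2d` model is enough to exercise the loop.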