Towards Robust Blind Face Restoration with Codebook Lookup Transformer
Authors: Shangchen Zhou, Kelvin C.K. Chan, Chongyi Li, Chen Change Loy
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4 Experiments: 4.1 Datasets; 4.2 Experimental Settings and Metrics; 4.3 Comparisons with State-of-the-Art Methods; 4.4 Ablation Studies. "Extensive experimental results on synthetic and real-world datasets verify the effectiveness of our method." |
| Researcher Affiliation | Academia | Shangchen Zhou, Kelvin C.K. Chan, Chongyi Li, Chen Change Loy; S-Lab, Nanyang Technological University; {s200094, chan0899, chongyi.li, ccloy}@ntu.edu.sg |
| Pseudocode | No | The paper describes the methodology in detail but does not include a formal pseudocode block or algorithm. |
| Open Source Code | Yes | The code is publicly available at: https://github.com/sczhou/CodeFormer |
| Open Datasets | Yes | We train our models on the FFHQ dataset [21], which contains 70,000 high-quality (HQ) images, and all images are resized to 512×512 for training. (A preprocessing sketch is given below the table.) |
| Dataset Splits | No | The paper explicitly mentions training and testing datasets, but a distinct validation dataset split or strategy is not specified within the paper's text. |
| Hardware Specification | Yes | Our method is implemented with the PyTorch framework and trained using four NVIDIA Tesla V100 GPUs. |
| Software Dependencies | No | Our method is implemented with the PyTorch framework and trained using four NVIDIA Tesla V100 GPUs. The paper names PyTorch but does not specify its version or version numbers for any other software dependency. |
| Experiment Setup | Yes | For all stages of training, we use the Adam [23] optimizer with a batch size of 16. We set the learning rate to 8 × 10⁻⁵ for stages I and II, and adopt a smaller learning rate of 2 × 10⁻⁵ for stage III. The three stages are trained with 1.5M, 200K, and 20K iterations, respectively. (An optimizer sketch is given below the table.) |
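
For context on the dataset row above, here is a minimal PyTorch sketch of the FFHQ preprocessing the paper describes (all 70,000 HQ images resized to 512×512). The `FFHQDataset` wrapper and the `data/ffhq` path are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch: load FFHQ images and resize them to 512x512,
# matching the training resolution quoted in the table. The directory
# layout and class name are assumptions for this example only.
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class FFHQDataset(Dataset):
    """Hypothetical FFHQ loader that resizes every image to 512x512."""

    def __init__(self, root="data/ffhq"):  # assumed local path
        self.paths = sorted(Path(root).glob("*.png"))
        self.transform = transforms.Compose([
            transforms.Resize((512, 512)),  # per the paper's setup
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        return self.transform(Image.open(self.paths[idx]).convert("RGB"))
```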
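
Similarly, the experiment-setup row can be made concrete. The sketch below only encodes the quoted hyperparameters (Adam, batch size 16, stage-wise learning rates and iteration counts); the `make_optimizer` helper and the `model` argument are hypothetical and not the authors' training code.

```python
# Illustrative encoding of the three-stage schedule from the table:
# Adam with batch size 16; lr 8e-5 for stages I and II, 2e-5 for stage III;
# 1.5M / 200K / 20K iterations respectively.
import torch

BATCH_SIZE = 16
STAGES = {
    "I":   {"lr": 8e-5, "iters": 1_500_000},
    "II":  {"lr": 8e-5, "iters": 200_000},
    "III": {"lr": 2e-5, "iters": 20_000},
}


def make_optimizer(model: torch.nn.Module, stage: str) -> torch.optim.Adam:
    """Hypothetical helper: Adam with the stage-specific learning rate."""
    return torch.optim.Adam(model.parameters(), lr=STAGES[stage]["lr"])
```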