A Dataset and Model for Realistic License Plate Deblurring
Authors: Haoyan Gong, Yuzheng Feng, Zhenrong Zhang, Xianxu Hou, Jingxin Liu, Siqi Huang, Hongbin Liu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments validate the reliability of the LPBlur dataset for both model training and testing, showcasing that our proposed model outperforms other state-of-the-art motion deblurring methods in realistic license plate deblurring scenarios. |
| Researcher Affiliation | Academia | School of AI and Advanced Computing, Xi'an Jiaotong-Liverpool University {haoyan.gong21, yuzheng.feng21, zhenrong.zhang21}@student.xjtlu.edu.cn, {xianxu.hou, jingxin.liu, siqi.huang, hongbin.liu}@xjtlu.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or explicitly labeled algorithm blocks. |
| Open Source Code | Yes | The dataset and code are available at https://github.com/haoyGONG/LPDGAN. |
| Open Datasets | Yes | we introduce the first large-scale license plate deblurring dataset named License Plate Blur (LPBlur)... The dataset and code are available at https://github.com/haoyGONG/LPDGAN. |
| Dataset Splits | No | The proposed LPBlur dataset is partitioned into a training set with 9,288 image pairs, and a test set with 1,000 image pairs. The paper does not explicitly state a separate validation set split. |
| Hardware Specification | Yes | All experiments are conducted on a GeForce RTX 3090 GPU. |
| Software Dependencies | No | The paper mentions using Adam optimizer but does not provide specific version numbers for software dependencies like programming languages or deep learning frameworks. |
| Experiment Setup | Yes | The shapes of the multi-scale input images for LPDGAN are (112, 224, 3), (56, 112, 3), and (28, 56, 3), respectively. Random rain adding and random cutout are utilized for data augmentation. The optimizer we use is Adam [Kingma and Ba, 2014]. The batch size is set to 7. The initial learning rate is 10^-4, and linear weight decay is used after the 100th epoch. |
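The learning-rate schedule quoted above (constant 10^-4, then linear decay starting after epoch 100) can be sketched as a plain Python function. This is a minimal illustration, not the authors' code: the total epoch count (`total_epochs`) and the assumption that the rate decays linearly to zero are hypothetical, since the paper excerpt only states when decay begins.

```python
def lpdgan_lr(epoch, base_lr=1e-4, decay_start=100, total_epochs=200):
    """Return the learning rate for a given epoch.

    Constant at base_lr until decay_start, then linearly decayed.
    total_epochs = 200 and decay-to-zero are assumptions for illustration;
    the paper only specifies base_lr = 1e-4 and decay after epoch 100.
    """
    if epoch < decay_start:
        return base_lr
    # Linear interpolation: base_lr at decay_start, 0 at total_epochs.
    frac = (epoch - decay_start) / (total_epochs - decay_start)
    return base_lr * max(0.0, 1.0 - frac)


# Example: rate is flat for the first 100 epochs, then halves by epoch 150.
print(lpdgan_lr(50))   # 1e-4
print(lpdgan_lr(150))  # 5e-5
```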