QFormer: An Efficient Quaternion Transformer for Image Denoising
Authors: Bo Jiang, Yao Lu, Guangming Lu, Bob Zhang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments demonstrate that the proposed QFormer produces state-of-the-art results in both denoising performance and efficiency. |
| Researcher Affiliation | Academia | Bo Jiang (1), Yao Lu (2), Guangming Lu (2), and Bob Zhang (3): (1) College of Mechanical and Electronic Engineering, Northwest A&F University, China; (2) Department of Computer Science, Harbin Institute of Technology at Shenzhen, China; (3) Department of Computer and Information Science, University of Macau, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, such as a specific repository link or explicit code release statement. |
| Open Datasets | Yes | Table 1: Average PSNR and SSIM of the denoised real images from the Nam, PolyU, and SIDD datasets. (A PSNR/SSIM computation sketch follows the table.) |
| Dataset Splits | No | The paper mentions using the Nam, PolyU, and SIDD datasets but does not provide train/validation/test splits, exact percentages, or sample counts needed for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models or processor types) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper specifies architectural parameters such as the feature-channel widths (C = 16, 32, or 44) and the QTB depth (N = 2), but does not report training details such as learning rate, batch size, or optimizer settings. (A hypothetical configuration sketch follows the table.) |
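
The Open Datasets row above quotes the paper's PSNR/SSIM evaluation on the Nam, PolyU, and SIDD benchmarks. As a point of reference only, the sketch below shows how these two standard metrics are commonly computed; it is not the authors' evaluation script, and the function name and the scikit-image call are our own assumptions.

```python
import numpy as np

def psnr(clean: np.ndarray, denoised: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio (dB) between a clean reference and a denoised image."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# SSIM is usually taken from an off-the-shelf implementation rather than re-derived,
# e.g. scikit-image (an assumed dependency, not one named by the paper):
# from skimage.metrics import structural_similarity
# score = structural_similarity(clean, denoised, channel_axis=-1, data_range=255)
```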
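
The Experiment Setup row notes that the only reproducible settings are architectural: feature-channel widths C = 16, 32, or 44 and QTB depth N = 2. Below is a minimal sketch of how those reported values could be collected into a configuration object; the class and field names are hypothetical, and the training hyperparameters the paper omits (learning rate, batch size, optimizer) are left unset rather than guessed.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QFormerConfig:
    """Hypothetical container for the settings the paper does and does not report."""
    feature_channels: int            # C, reported as 16, 32, or 44
    qtb_depth: int = 2               # N, the reported Quaternion Transformer Block depth
    # Not reported in the paper; deliberately left unspecified.
    learning_rate: Optional[float] = None
    batch_size: Optional[int] = None
    optimizer: Optional[str] = None

# The three channel widths quoted in the Experiment Setup row above.
qformer_c16 = QFormerConfig(feature_channels=16)
qformer_c32 = QFormerConfig(feature_channels=32)
qformer_c44 = QFormerConfig(feature_channels=44)
```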