Pseudo-Siamese Blind-spot Transformers for Self-Supervised Real-World Denoising
Authors: Yuhui Quan, Tianxiang Zheng, Hui Ji
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments |
| Researcher Affiliation | Academia | Yuhui Quan, Tianxiang Zheng (School of Computer Science and Engineering, South China University of Technology); Hui Ji (Department of Mathematics, National University of Singapore) |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | Our work is implemented on PyTorch 1.10 and CUDA 11.8, which will be released upon paper acceptance. |
| Open Datasets | Yes | Three widely-used real-world datasets are used for evaluation: SIDD [13], DND [14], and NIND [64]. |
| Dataset Splits | Yes | The SIDD-Medium subset is chosen as training data, consisting of 320 noisy/clean image pairs. The validation subset, denoted by SIDD-Validation, consists of 1280 paired samples for hyper-parameter tuning and ablation study. (See the split-loading sketch below the table.) |
| Hardware Specification | Yes | All experiments are conducted on an NVIDIA A6000 GPU. |
| Software Dependencies | Yes | Our work is implemented on PyTorch 1.10 and CUDA 11.8 |
| Experiment Setup | Yes | The grid size of SelfFormer-D is set to image size divided by 8, and it doubles for SelfFormer-F. ... SelfFormer-D is optimized using Adam with a learning rate of 0.0001, and that of SelfFormer-F is doubled. Other parameters of Adam are set to default. The entire model is trained for 30 epochs for full convergence. (See the configuration sketch below the table.) |
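
For orientation, here is a minimal PyTorch sketch of the data split described in the Dataset Splits row. The directory layout, file-name patterns, and the `SIDDPairs` class are assumptions made for illustration, not details taken from the paper; only the pair counts (320 training pairs, 1280 validation samples) come from the table above.

```python
from pathlib import Path
from torch.utils.data import Dataset
from torchvision.io import read_image


class SIDDPairs(Dataset):
    """Paired noisy/ground-truth images from a SIDD-style folder tree."""

    def __init__(self, root: str):
        # File-name patterns follow the public SIDD naming convention
        # (NOISY/GT); adjust to the actual download layout.
        self.noisy = sorted(Path(root).rglob("*NOISY*.PNG"))
        self.clean = sorted(Path(root).rglob("*GT*.PNG"))
        assert len(self.noisy) == len(self.clean), "unpaired files"

    def __len__(self):
        return len(self.noisy)

    def __getitem__(self, i):
        # Returns (noisy, clean) as uint8 CHW tensors; normalization is
        # left to the training pipeline.
        return read_image(str(self.noisy[i])), read_image(str(self.clean[i]))


# SIDD-Medium: 320 noisy/clean training pairs (per the table above).
train_set = SIDDPairs("data/SIDD_Medium_Srgb")  # path is an assumption
# SIDD-Validation: 1280 paired samples for tuning and ablations.
val_set = SIDDPairs("data/SIDD_Validation")     # path is an assumption
```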
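
Likewise, a hedged sketch of the training configuration from the Experiment Setup row. The two `Conv2d` stand-ins for the SelfFormer-D and SelfFormer-F branches and the 256-pixel patch size are hypothetical; only the Adam learning rates (1e-4, doubled for SelfFormer-F), the default remaining Adam parameters, the grid-size rule, and the 30-epoch schedule come from the table.

```python
import torch
import torch.nn as nn

# Stand-ins for the two blind-spot transformer branches; the real
# architectures are defined in the paper, not reproduced here.
selfformer_d = nn.Conv2d(3, 3, 3, padding=1)  # placeholder for SelfFormer-D
selfformer_f = nn.Conv2d(3, 3, 3, padding=1)  # placeholder for SelfFormer-F

# Grid-size rule from the table: image size divided by 8 for
# SelfFormer-D, doubled for SelfFormer-F. The patch size is assumed.
image_size = 256
grid_d = image_size // 8  # -> 32
grid_f = 2 * grid_d       # -> 64

# Adam with lr 1e-4 for SelfFormer-D; SelfFormer-F uses double that.
# Remaining Adam parameters stay at PyTorch defaults, per the table.
opt_d = torch.optim.Adam(selfformer_d.parameters(), lr=1e-4)
opt_f = torch.optim.Adam(selfformer_f.parameters(), lr=2e-4)

num_epochs = 30  # trained for 30 epochs for full convergence
```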