Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising

Authors: Yaochen Xie, Zhengyang Wang, Shuiwang Ji

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We analyze our proposed Noise2Same both theoretically and experimentally. The experimental results show that our Noise2Same consistently outperforms previous self-supervised denoising methods in terms of denoising performance and training efficiency.
Researcher Affiliation | Academia | Yaochen Xie, Texas A&M University, College Station, TX 77843, ethanycx@tamu.edu; Zhengyang Wang, Texas A&M University, College Station, TX 77843, zhengyang.wang@tamu.edu; Shuiwang Ji, Texas A&M University, College Station, TX 77843, sji@tamu.edu
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We evaluate our Noise2Same on four datasets, including RGB natural images (ImageNet ILSVRC 2012 Val [21]), generated hand-written Chinese character images (Hàn Zì [1]), physically captured 3D microscopy data (Planaria [27]) and grey-scale natural images (BSD68 [15]).
Dataset Splits | No | The paper mentions training and testing, but does not provide specific details on dataset splits for training, validation, and testing within the main text. It refers to appendices for detailed settings, which are not provided in the prompt.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for experiments, beyond mentioning 'a single GPU' in the context of batch sizes.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions) that would be needed to replicate the experiment.
Experiment Setup | Yes | By default, we set λ_inv = 2 according to Theorem 1. In some cases, setting λ_inv to different values according to the scale of observed L_inv during training could help achieve a better denoising performance. ... Specifically, we adjust the batch sizes for each method to fill the memory of a single GPU, namely, 128 for Noise2Self, 64 for Noise2Same and 32 for Laine et al.
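
The λ_inv referenced in the Experiment Setup row is the weight on the invariance term L_inv in the Noise2Same self-supervised objective. As a rough illustration of how that hyperparameter enters training, below is a minimal PyTorch-style sketch assuming the usual Noise2Same form (a reconstruction term plus λ_inv times the square root of a masked invariance term); the masking and replacement strategy, function names, and tensor shapes are assumptions for illustration, not the authors' implementation (the paper reports no code release).

```python
import torch

def noise2same_loss(model, x, mask, lambda_inv=2.0):
    """Hypothetical sketch of a Noise2Same-style objective.

    x          : noisy input batch, shape (B, C, H, W)
    mask       : float tensor of the same shape, 1.0 on the masked subset J
    lambda_inv : weight on the invariance term (the paper's default is 2,
                 per Theorem 1, and may be tuned against the observed L_inv)
    """
    # Full (unmasked) forward pass: reconstruction term over all pixels.
    out_full = model(x)
    l_rec = torch.mean((out_full - x) ** 2)

    # Masked forward pass: pixels in J are replaced before the second pass.
    # (Gaussian replacement here is an assumption; other strategies exist.)
    x_masked = x * (1.0 - mask) + torch.randn_like(x) * mask
    out_masked = model(x_masked)

    # Invariance term: output discrepancy restricted to the masked set J.
    m = mask.sum().clamp_min(1.0)
    l_inv = torch.sum(((out_full - out_masked) * mask) ** 2) / m

    # Self-supervised bound: L_rec + lambda_inv * sqrt(L_inv).
    return l_rec + lambda_inv * torch.sqrt(l_inv)
```

Leaving lambda_inv at 2.0 mirrors the default stated in the quote; per the same quote, it can be adjusted based on the scale of L_inv observed during training.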