Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Efficient Online Training for Zero-Shot Time-Lapse Microscopy Denoising and Super-Resolution

Authors: Ruian He, Ri Cheng, Xinkai Lyu, Weimin Tan, Bo Yan

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on both synthetic and real-world noise demonstrate that our method achieves state-of-the-art performance among zero-shot denoising approaches and is competitive with self-supervised methods. Notably, our method can reduce training time by up to 10x compared to the previous SOTA method.
Researcher Affiliation | Academia | Ruian He, Ri Cheng, Xinkai Lyu, Weimin Tan*, Bo Yan*, School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University
Pseudocode | No | The paper describes methods verbally and with mathematical formulations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | CTC (Maška et al. 2023) has time-lapse microscopy videos of various types of cells and nuclei shot through multiple microscopes. We select a clean dataset, PhC-C2DH-U373, and add synthetic noise for evaluating the quantitative performance. [...] We conduct experiments on two video datasets (T4 and EGFP) from the DeepSeMi paper (Zhang et al. 2023) [...] We also conducted experiments on a single noisy image from the ZS-DeconvNet paper (Qiao et al. 2024).
Dataset Splits | No | The paper uses zero-shot and self-supervised approaches, which often train on the entire available (noisy) data or on single frames. It mentions selecting consecutive frames and adding synthetic noise, but does not specify explicit training/validation/test dataset splits with percentages, counts, or predefined splits.
Hardware Specification | Yes | All time costs are tested on an RTX 3090 GPU.
Software Dependencies | No | We implement our framework with PyTorch. The paper mentions PyTorch but does not provide specific version numbers for any software libraries or dependencies.
Experiment Setup | Yes | We use a UNet (Ronneberger, Fischer, and Brox 2015) with 17 convolutional layers and a Pixel Shuffle (Shi et al. 2016) layer at the end. The number of input channels is 5 (5 neighboring frames concatenated), and the number of output channels is 1. The input frames are randomly cropped into 128×128 patches during training. The optimizer is Adam (Kingma and Ba 2014), and the learning rate is set to 3e-4. The final loss function is a weighted sum of the two losses: L = L_LR + γ·L_SR, where we set γ = (2 · Epoch) / (Total Epochs), so the weight gradually increases during training. For our experiments, α = 0.9 is the best choice on the PhC dataset.
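Under the setup quoted above, the weight γ on the super-resolution loss ramps linearly from near 0 to 2 over the course of training. A minimal sketch of that schedule in plain Python (the function names and the scalar stand-ins `l_lr`/`l_sr` for the paper's two loss terms are our own, not from the paper):

```python
def gamma_schedule(epoch: int, total_epochs: int) -> float:
    """SR-loss weight from the paper: gamma = (2 * Epoch) / (Total Epochs)."""
    return 2.0 * epoch / total_epochs

def total_loss(l_lr: float, l_sr: float, epoch: int, total_epochs: int) -> float:
    """Weighted sum L = L_LR + gamma * L_SR, with gamma growing during training."""
    return l_lr + gamma_schedule(epoch, total_epochs) * l_sr

# The SR term contributes little early on and is weighted 2x by the final epoch.
print(gamma_schedule(1, 100))    # 0.02
print(gamma_schedule(100, 100))  # 2.0
```

This schedule lets training first stabilize on the low-resolution (denoising) objective before the super-resolution term dominates.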