Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Accelerating Diffusion Models for Inverse Problems through Shortcut Sampling

Authors: Gongye Liu, Haoze Sun, Jiayi Li, Fei Yin, Yujiu Yang

IJCAI 2024 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimentally, we demonstrate SSD's effectiveness on multiple representative IR tasks. Our method achieves competitive results with only 30 NFEs compared to state-of-the-art zero-shot methods (100 NFEs) and outperforms them with 100 NFEs in certain tasks.
Researcher Affiliation | Academia | Gongye Liu, Haoze Sun, Jiayi Li, Fei Yin, Yujiu Yang (Tsinghua University), EMAIL, EMAIL
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | Code is available at https://github.com/GongyeLiu/SSD.
Open Datasets | Yes | To evaluate the performance of SSD, we conduct experiments on two datasets with different distribution characteristics: CelebA 256×256 [Karras et al., 2017] for face images and ImageNet 256×256 [Deng et al., 2009] for natural images, both containing 1k validation images independent of the training dataset.
Dataset Splits | Yes | To evaluate the performance of SSD, we conduct experiments on two datasets with different distribution characteristics: CelebA 256×256 [Karras et al., 2017] for face images and ImageNet 256×256 [Deng et al., 2009] for natural images, both containing 1k validation images independent of the training dataset.
Hardware Specification | Yes | All of our experiments are conducted on a single NVIDIA RTX 2080Ti GPU.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python, PyTorch versions) were mentioned in the paper.
Experiment Setup | No | The paper describes general experimental settings like datasets and evaluation metrics, but does not provide specific hyperparameters such as learning rate, batch size, or optimizer settings.