Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Inverse Bridge Matching Distillation

Authors: Nikita Gushchin, David Li, Daniil Selikhanovych, Evgeny Burnaev, Dmitry Baranchuk, Alexander Korotin

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We evaluate our approach for both conditional and unconditional types of bridge matching on a wide set of setups, including super-resolution, JPEG restoration, sketch-to-image, and other tasks, and show that our distillation technique allows us to accelerate the inference of DBMs from 4x to 100x and even provide better generation quality than used teacher model depending on particular setup. We provide the code at https://github.com/ngushchin/IBMD. 5. Experiments This section highlights the applicability of our IBMD distillation method in both unconditional and conditional settings. To demonstrate this, we conducted experiments utilizing pretrained unconditional models used in I2SB paper (Liu et al., 2023a). Then we evaluated IBMD in conditional settings using DDBM (Zhou et al., 2024a) setup (§5.2).
Researcher Affiliation Collaboration ¹Skolkovo Institute of Science and Technology, ²Artificial Intelligence Research Institute, ³Moscow Institute of Physics and Technology, ⁴Yandex Research, ⁵HSE University.
Pseudocode Yes Algorithm 1 Inverse Bridge Matching Distillation (IBMD)
Open Source Code Yes We provide the code at https://github.com/ngushchin/IBMD.
Open Datasets Yes utilize the Edges→Handbags dataset (Isola et al., 2017) with a resolution of 64×64 pixels and the DIODE-Outdoor dataset (Vasiljevic et al., 2019) with a resolution of 256×256 pixels. For these tasks, we report FID and Inception Scores (IS) (Barratt & Sharma, 2018). For the image inpainting task, we use the same setup of center-inpainting as before.
Dataset Splits Yes For the evaluation we follow the same protocol used in the I2SB paper, i.e. use the full validation subset of ImageNet for super-resolution task and the 10 000 subset of validation for other tasks. ... For evaluation, we follow established benchmarks (Saharia et al., 2022; Song et al., 2023) by computing the FID on reconstructions from the full ImageNet validation set, with comparisons drawn against the training set statistics. ... DIODE-Outdoor Following prior work (Zhou et al., 2023; Zheng et al., 2024; He et al., 2024), we used the DIODE outdoor dataset, preprocessed via the DBIM repository's script for training/test sets (Table 8).
Hardware Specification Yes Table 9. Training times and NFE across different tasks, teachers, and datasets. Approximate time on 8 A100 GPUs.
Software Dependencies No The paper mentions extending existing repositories (I2SB, DDBM) and lists their GitHub links (Table 8) but does not provide specific version numbers for software dependencies like Python, PyTorch, CUDA, or other libraries used in their implementation.
Experiment Setup Yes All hyperparameters are listed in Table 7. ... We used batch size 256 and EMA decay 0.99 for setups.
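The actual procedure is Algorithm 1 (IBMD) in the paper and its reference code at the linked repository. As a rough, self-contained illustration of the *general* distillation pattern the paper belongs to (compressing a many-step teacher sampler into a one-step student), the toy sketch below fits a one-step linear student to a 100-step iterative teacher by least squares. This is an assumption-laden stand-in, not the IBMD objective: `teacher_sample` and the linear student are invented here purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_sample(y, n_steps=100):
    # Toy "teacher": iteratively drifts a corrupted input y toward 2*y,
    # standing in for a multi-step diffusion-bridge sampler (100 NFE).
    x = y.copy()
    for _ in range(n_steps):
        x += 0.05 * (2.0 * y - x)
    return x

# Collect (input, teacher-output) training pairs.
Y = rng.normal(size=(1000, 1))
X = teacher_sample(Y)

# One-step student: a linear map fitted by least squares
# (distillation viewed as regression onto the teacher's outputs).
w, *_ = np.linalg.lstsq(Y, X, rcond=None)

# The student matches the 100-step teacher in a single step (1 NFE).
y_test = rng.normal(size=(5, 1))
student = y_test @ w
teacher = teacher_sample(y_test)
print(np.max(np.abs(student - teacher)))  # ~0 (the toy teacher is exactly linear)
```

Because the toy teacher is an exactly linear map, one least-squares fit recovers it perfectly; the point of IBMD (and of distillation methods generally) is that a comparable one-step surrogate can be trained even when the teacher is a nonlinear neural sampler, which is where the paper's reported 4x–100x inference speedups come from.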