Light and Optimal Schrödinger Bridge Matching
Authors: Nikita Gushchin, Sergei Kholkin, Evgeny Burnaev, Alexander Korotin
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experimentally showcase the performance of our solver in a range of practical tasks. The code for our solver can be found at https://github.com/SKholkin/LightSB-Matching. and 5. Experimental Illustrations: To evaluate our new LightSB-M solver, we considered several setups from related works. |
| Researcher Affiliation | Academia | Nikita Gushchin*¹, Sergei Kholkin*¹, Evgeny Burnaev¹,², Alexander Korotin¹,²; ¹Skolkovo Institute of Science and Technology, ²Artificial Intelligence Research Institute. Correspondence to: Nikita Gushchin <n.gushchin@skoltech.ru>, Sergei Kholkin <s.kholkin@skoltech.ru>. |
| Pseudocode | Yes | Algorithm 1 Light Schrödinger Bridge Matching (LightSB-M). Input: plan $\pi \in \Pi(p_0, p_1)$ accessible by samples; adjusted Schrödinger potential $v_\theta$ parametrized by a Gaussian mixture ($\theta = \{\alpha_k, \mu_k, \Sigma_k\}_{k=1}^{K}$). Output: learned drift $g_\theta$ approximating the optimal $g^*$. Repeat: sample a batch of pairs $\{x_0^n, x_1^n\}_{n=1}^{N} \sim \pi$; sample a batch $\{t_n\}_{n=1}^{N} \sim U[0, 1]$; sample a batch $\{x_t^n\}_{n=1}^{N} \sim W^{\epsilon}\vert_{x_0, x_1}$; compute $\mathcal{L}_\theta = \frac{1}{N}\sum_{n=1}^{N} \big\| g_\theta(x_t^n, t_n) - \frac{1}{1 - t_n}(x_1^n - x_t^n) \big\|^2$; update $\theta$ using $\partial \mathcal{L}_\theta / \partial \theta$; until converged. (A hedged PyTorch sketch of this loop is given after the table.) |
| Open Source Code | Yes | The code for our solver can be found at https://github.com/SKholkin/LightSB-Matching. and The code for our solver is written in PyTorch and available at https://github.com/SKholkin/LightSB-Matching. |
| Open Datasets | Yes | We use the SB mixtures benchmark proposed by (Gushchin et al., 2023b, M4) and based on the dataset from the Kaggle competition Open Problems Multimodal Single-Cell Integration. and FFHQ dataset (Karras et al., 2019). |
| Dataset Splits | Yes | In our experiments, we consider two setups by taking data from two different days as p0, p1 to solve the Schrödinger Bridge and one intermediate day for evaluation. and According to (Korotin et al., 2024) we first split the FFHQ data into train (first 60k) and test (last 10k) images. |
| Hardware Specification | Yes | Langevin-based (Mokrov et al., 2024) [1 GPU V100] and KL minimization LightSB (Korotin et al., 2024) [4 CPU cores] and LightSB-M (ID, ours) [4 CPU cores] |
| Software Dependencies | No | No specific version numbers for PyTorch or the POT library are provided. The paper states: "We build our LightSB-M implementation upon LightSB official implementation https://github.com/ngushchin/LightSB. ... In the Mini-batch (MB) setting, discrete OT algorithm ot.emd is taken from POT library (Flamary et al., 2021). The batch size is always 128." |
| Experiment Setup | Yes | All the parametrization, optimization and initialization details are the same as (Korotin et al., 2024) if not stated otherwise. In the Mini-batch (MB) setting, discrete OT algorithm ot.emd is taken from POT library (Flamary et al., 2021). The batch size is always 128. and We use K = 250 potentials and Adam optimizer with lr = $10^{-3}$ in all the cases to train LightSB-M. and All models are trained with ϵ = 0.1 if not stated otherwise. (A hedged sketch of mini-batch OT pairing with ot.emd follows the table.) |
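
The Algorithm 1 cell above compresses the whole training loop into one sentence, so a minimal PyTorch sketch may help make the sampling and regression steps concrete. It assumes a generic drift network `g_theta`; the paper instead derives the drift from the Gaussian-mixture adjusted Schrödinger potential $v_\theta$. The helper name `lightsbm_step` and the small clamp on $1 - t$ are illustrative additions, not from the paper.

```python
import torch

def lightsbm_step(g_theta, optimizer, x0, x1, eps=0.1):
    """One bridge-matching step on a batch of paired samples (x0, x1) ~ pi."""
    n = x0.shape[0]
    t = torch.rand(n, 1, device=x0.device)  # t_n ~ U[0, 1]
    # Sample x_t from the Brownian bridge W^eps | x0, x1 at time t:
    # mean (1 - t) * x0 + t * x1, variance eps * t * (1 - t).
    x_t = (1 - t) * x0 + t * x1 + torch.sqrt(eps * t * (1 - t)) * torch.randn_like(x0)
    # Regress the drift onto the matching target (x1 - x_t) / (1 - t).
    target = (x1 - x_t) / (1 - t).clamp_min(1e-5)
    loss = ((g_theta(x_t, t) - target) ** 2).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```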
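
The Experiment Setup row states only that the mini-batch (MB) plan uses ot.emd from the POT library with batch size 128. Below is a hypothetical sketch of how two batches could be paired with such a plan; `ot.dist` and `ot.emd` are real POT calls, while the resampling step and the function name `minibatch_ot_pairs` are assumptions rather than the paper's exact procedure.

```python
import numpy as np
import ot  # POT library (Flamary et al., 2021)

def minibatch_ot_pairs(x0, x1):
    """Pair two equally sized numpy batches via an exact discrete OT plan."""
    n = x0.shape[0]
    a = np.full(n, 1.0 / n)    # uniform marginal on the x0 batch
    b = np.full(n, 1.0 / n)    # uniform marginal on the x1 batch
    cost = ot.dist(x0, x1)     # squared Euclidean cost matrix
    plan = ot.emd(a, b, cost)  # exact discrete OT plan between the batches
    # Resample n index pairs (i, j) with probability proportional to plan[i, j].
    flat = np.random.choice(n * n, size=n, p=(plan / plan.sum()).ravel())
    i, j = np.unravel_index(flat, (n, n))
    return x0[i], x1[j]
```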