Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Feedback Schrödinger Bridge Matching

Authors: Panagiotis Theodoropoulos, Nikolaos Komianos, Vincent Pacelli, Guan-Horng Liu, Evangelos Theodorou

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We demonstrate the efficacy of our FSBM in a variety of distribution matching tasks, such as Crowd Navigation, Opinion Depolarization, and Unpaired Image Translation, compared against other state-of-the-art distribution matching frameworks such as GSBM (Liu et al., 2024), DSBM (Shi et al., 2023), and Light and Optimal Schrödinger Bridge Matching (LOSBM; Gushchin et al. (2024)).
Researcher Affiliation Collaboration 1 Georgia Institute of Technology; 2 FAIR, Meta
Pseudocode Yes Algorithm 1 Feedback Schrödinger Bridge Matching (FSBM)
Open Source Code No All methods, including our FSBM, are implemented in PyTorch (Paszke et al., 2019).
Open Datasets Yes We follow the setup of Korotin et al. (2023) with the pre-trained ALAE autoencoder (Pidhorskyi et al., 2020) on the 1024×1024 FFHQ dataset (Karras et al., 2019) to perform the translation in the latent space of dimensions 512×1, enabling more efficient training and sampling.
Dataset Splits No Notably, the number of aligned images was 4% of the total dataset for the gender translation and 8% for the age translation.
Hardware Specification No Notably, Table 4 demonstrates that FSBM required only an additional 180 MB of VRAM compared to DSBM to manage the aligned latent vectors, which is tractable for most modern GPUs, while training DSBM and FSBM for the same number of epochs requires virtually identical training duration.
Software Dependencies No All methods, including our FSBM, are implemented in PyTorch (Paszke et al., 2019). In our experiments, for the trajectories of the aligned data, we utilized the POT Python package (Flamary et al., 2021) to acquire optimal pairings, and GSBM for one epoch to obtain the trajectories between the endpoints.
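The "optimal pairings" mentioned above are obtained with the POT package's optimal-transport solvers; as a dependency-free illustration of the underlying assignment problem (not the paper's actual code, and only viable for tiny sets), a brute-force sketch with hypothetical names:

```python
# Minimal sketch of "optimal pairings" between two small 1-D point sets.
# In practice POT (ot.emd) solves this efficiently at scale; here we
# exhaustively search for the assignment minimizing total squared distance.
from itertools import permutations

def optimal_pairing(xs, ys):
    """Return (perm, cost): perm[i] is the index of the y matched to xs[i],
    minimizing the sum of squared distances (exhaustive search)."""
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(len(ys))):
        cost = sum((xs[i] - ys[j]) ** 2 for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return list(best_perm), best_cost

pairing, cost = optimal_pairing([0.0, 1.0, 2.0], [2.1, 0.1, 1.1])
print(pairing, cost)  # each source point matched to its nearest target
```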
Experiment Setup No We follow the setup of Korotin et al. (2023) with the pre-trained ALAE autoencoder (Pidhorskyi et al., 2020) on the 1024×1024 FFHQ dataset (Karras et al., 2019) to perform the translation in the latent space of dimensions 512×1, enabling more efficient training and sampling. All networks are trained from scratch, without utilizing any pretrained checkpoint, and optimized with AdamW (Loshchilov, 2017).
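The quoted setup names AdamW as the optimizer; as a hedged, pure-Python sketch of a single AdamW update step (decoupled weight decay per Loshchilov & Hutter), with illustrative hyperparameters rather than the paper's actual ones:

```python
# One AdamW step on a single scalar parameter (illustrative constants,
# not the paper's hyperparameters).
def adamw_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=1e-2):
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    # Decoupled weight decay: applied directly to the parameter rather
    # than folded into the gradient (the key difference from Adam + L2).
    theta = theta - lr * (m_hat / (v_hat ** 0.5 + eps) + weight_decay * theta)
    return theta, m, v

theta, m, v = adamw_step(theta=1.0, grad=0.5, m=0.0, v=0.0, t=1)
```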