Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
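As a rough illustration of how such a validation against manual labels might be computed, the sketch below scores LLM-assigned labels per variable. All names and records here are hypothetical; the actual pipeline and accuracy metrics are the ones described in [1].

```python
# Hypothetical sketch: comparing LLM-assigned reproducibility labels
# against manual labels. The records are placeholders, not data from [1].
from collections import defaultdict

# (variable, llm_label, manual_label) triples -- illustrative only
records = [
    ("Open Source Code", "No", "No"),
    ("Dataset Splits", "No", "Yes"),
    ("Pseudocode", "Yes", "Yes"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for variable, llm_label, manual_label in records:
    totals[variable] += 1
    hits[variable] += int(llm_label == manual_label)

for variable in totals:
    print(f"{variable}: accuracy {hits[variable] / totals[variable]:.2f}")
```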

Deep Momentum Multi-Marginal Schrödinger Bridge

Authors: Tianrong Chen, Guan-Horng Liu, Molei Tao, Evangelos Theodorou

NeurIPS 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test DMSB on 2D synthetic datasets and a real-world scRNA-seq dataset [30]. We choose state-of-the-art algorithms MIOFlow [9] and NLSB [11] as our baselines. Our algorithm outperforms the baselines significantly, as evidenced by experiments on synthetic datasets and a real-world single-cell RNA sequencing dataset.
Researcher Affiliation | Academia | Tianrong Chen, Guan-Horng Liu, Molei Tao, Evangelos A. Theodorou, Georgia Institute of Technology, USA (EMAIL)
Pseudocode | Yes | Algorithm 1: Sampling Procedure of DMSB; Algorithm 2: Langevin Sampler at the t_i marginal constraint; Algorithm 3: DMSB Training; Algorithm 4: Function OptSubSet (optimization for subsets). (A generic Langevin-sampler sketch follows the table.)
Open Source Code | No | The paper does not explicitly state that its code is open source or provide a link to a code repository. It mentions using the 'official implementation' for baselines, but not for its own method.
Open Datasets | Yes | We test DMSB on 2D synthetic datasets and a real-world scRNA-seq dataset [30]. ... We inherit processed data from [8].
Dataset Splits | No | For the scRNA-seq dataset, we split data into train and test subsets (85% and 15%). The paper does not explicitly state a separate validation split percentage. (The split is illustrated in the configuration sketch after the table.)
Hardware Specification | Yes | The network adopts positional encoding and is trained with AdamW [45] on one Nvidia 3090 Ti GPU.
Software Dependencies | No | The paper mentions AdamW [45] but does not specify version numbers for any software libraries or dependencies, such as Python, PyTorch, or CUDA.
Experiment Setup | Yes | The learning rate for all datasets is set to 2e-4 and the training batch size is 256. ... The hyperparameters can be found in Table 7, which lists 'time steps', '# BI', 'g(t)', '# Parameters', 'T', 'SNR', and '# vt Langevin' for each dataset. (See the training-configuration sketch after the table.)
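The Pseudocode row mentions Algorithm 2, a Langevin sampler used at marginal constraints. The paper's exact algorithm is not reproduced here; below is a minimal sketch of generic unadjusted Langevin dynamics for sampling from a density known up to normalization, with the function name, step count, and step size chosen purely for illustration.

```python
# Minimal sketch of unadjusted Langevin dynamics (ULA), assuming a
# differentiable log-density. Generic illustration, not the paper's
# Algorithm 2.
import torch

def langevin_sample(log_prob, x0, n_steps=100, step_size=1e-2):
    """Draw approximate samples from exp(log_prob) via ULA."""
    x = x0.clone()
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(log_prob(x).sum(), x)[0]
        noise = torch.randn_like(x)
        x = x.detach() + step_size * grad + (2 * step_size) ** 0.5 * noise
    return x

# Usage: sample from a standard 2D Gaussian (log-density up to a constant).
samples = langevin_sample(lambda x: -0.5 * (x ** 2).sum(dim=-1),
                          torch.randn(256, 2))
```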
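To make the reported setup concrete, this sketch wires the quoted hyperparameters (85/15 train/test split, AdamW, learning rate 2e-4, batch size 256) into a generic PyTorch training skeleton. Only those hyperparameters come from the paper; the dataset, model, and loss are placeholders.

```python
# Hedged sketch: the 85/15 split, AdamW, lr=2e-4, and batch size 256 are
# quoted from the paper; the dataset, model, and loss are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

data = TensorDataset(torch.randn(1000, 5))          # placeholder dataset
n_train = int(0.85 * len(data))                     # 85% train, 15% test
train_set, test_set = random_split(data, [n_train, len(data) - n_train])

model = torch.nn.Linear(5, 5)                       # placeholder network
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
loader = DataLoader(train_set, batch_size=256, shuffle=True)

for (batch,) in loader:
    optimizer.zero_grad()
    loss = ((model(batch) - batch) ** 2).mean()     # placeholder loss
    loss.backward()
    optimizer.step()
```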