Mean-field Chaos Diffusion Models

Authors: Sungwoo Park, Dongjun Kim, Ahmed Alaa

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "6. Empirical Study: This section provides a numerical validation of the efficacy of integrating MFT into the SGM framework, particularly in extreme scenarios of large cardinality, where previous works struggle to achieve robust performance."
Researcher Affiliation | Academia | (1) Department of Electrical Engineering and Computer Sciences, UC Berkeley; (2) Department of Computer Science, Stanford; (3) UCSF.
Pseudocode | Yes | Appendix A.9.1 (Training Mean-Field Chaotic Diffusion Models) presents the algorithmic implementation of mean-field score matching and the training procedure with objective (P3). Appendix A.9.2 (Sampling Scheme for Mean-Field Chaos Diffusion Models) proposes a modified Euler scheme, adapted for mean-field interacting particle systems (Bossy & Talay, 1997; dos Reis et al., 2022), that approximates the stochastic differential equations in the mean-field limit; the proposed scheme involves a four-step sampling procedure.
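The paper's actual four-step sampler is given in Appendix A.9.2; as a rough illustration only, a generic Euler-Maruyama loop for a mean-field interacting particle system can be sketched as follows. The empirical-mean interaction, the toy drift, and all names here are hypothetical stand-ins, not the paper's procedure:

```python
import numpy as np

def euler_maruyama_mean_field(x0, drift, sigma, n_steps, dt, seed=None):
    """Illustrative Euler-Maruyama integration of an interacting particle
    system dX = drift(X, mean-field) dt + sigma dW, where the mean-field
    term is approximated by the empirical mean of the particle cloud.

    x0:    (N, d) initial particle positions
    drift: callable (x, mean_field) -> (N, d) drift values
    sigma: scalar diffusion coefficient
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        # Empirical approximation of the mean-field interaction term.
        mean_field = x.mean(axis=0)
        noise = rng.standard_normal(x.shape)
        x = x + drift(x, mean_field) * dt + sigma * np.sqrt(dt) * noise
    return x

# Toy drift pulling each particle toward the empirical mean of the cloud.
particles = euler_maruyama_mean_field(
    x0=np.random.default_rng(0).standard_normal((1250, 3)),
    drift=lambda x, m: -(x - m),
    sigma=0.1,
    n_steps=300,
    dt=0.01,
)
```

The contractive toy drift makes the particle cloud collapse toward its empirical mean up to a noise-driven equilibrium, which is the qualitative behavior a mean-field denoising step relies on.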
Open Source Code | No | No explicit statement or link regarding the release of the source code for the methodology described in this paper was found.
Open Datasets | Yes | "Datasets. This paper utilizes ShapeNet (Chang et al., 2015), a widely recognized dataset comprising a vast collection of 3D object models across multiple categories, and MedShapeNet (Li et al., 2023), a curated collection of medical shape data designed for advanced imaging analysis."
Dataset Splits | No | No explicit train/test/validation dataset splits with specific percentages or counts were found.
Hardware Specification | Yes | All experiments were conducted using a setup of 4 NVIDIA A100 GPUs.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8) were explicitly mentioned in the paper.
Experiment Setup | Yes | Table 4 (hyperparameters by cardinality of data instances):
- Learning Rate: 1.0e-3 / 1.0e-4
- (VP SDE): σ²_t = β_t, where β_t = β_min + t(β_max − β_min), β_max = 20.0, β_min = 0.1
- (Diffusion Steps): K ∈ {1, …, 300}, |K| = 300
- (Branching Ratio): b = 2
- (Branching Steps): {100, 200} / {50, 100, 150, 200} / {50, 100, 150, 200, 250}
- (Initial Cardinality) N₀: 250 / 625 / 1250 / 3125
- (Interaction Degree) k: 10 / 3 / 3 / 3
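The linear VP-SDE noise schedule stated in Table 4 can be written out numerically. A minimal sketch, assuming t is normalized to [0, 1] over the 300 diffusion steps (the table does not state the time grid, so that normalization is an assumption):

```python
import numpy as np

# Linear VP-SDE schedule from Table 4: beta_t = beta_min + t * (beta_max - beta_min).
# The normalization of t to [0, 1] across |K| = 300 steps is assumed here.
BETA_MIN, BETA_MAX, NUM_STEPS = 0.1, 20.0, 300

t = np.linspace(0.0, 1.0, NUM_STEPS)
beta = BETA_MIN + t * (BETA_MAX - BETA_MIN)  # ramps linearly from 0.1 to 20.0
```

Under this convention the noise rate starts at β_min = 0.1 and increases linearly to β_max = 20.0 at the final step, matching the standard VP-SDE parameterization.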