Nonparametric Generative Modeling with Conditional Sliced-Wasserstein Flows
Authors: Chao Du, Tianbo Li, Tianyu Pang, Shuicheng Yan, Min Lin
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we first examine the efficacy of the proposed techniques of locally-connected projections and pyramidal schedules. We then demonstrate that with these techniques our ℓ-CSWF further enables superior performance on conditional modeling tasks, including class-conditional generation and image inpainting. We report the FID scores (Heusel et al., 2017) on CIFAR10 and CelebA in Table 1 for quantitative evaluation. |
| Researcher Affiliation | Industry | 1Sea AI Lab, Singapore. Correspondence to: Chao Du <duchao@sea.com>, Min Lin <linmin@sea.com>. |
| Pseudocode | Yes | Algorithm 1: Conditional Sliced-Wasserstein Flow. Algorithm 2: Sliced-Wasserstein Flow (SWF) (Liutkus et al., 2019) |
| Open Source Code | Yes | Code is available at https://github.com/duchao0726/Conditionial-SWF. |
| Open Datasets | Yes | We use MNIST, Fashion-MNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2009) and CelebA (Liu et al., 2015) datasets in our experiments. |
| Dataset Splits | No | The paper mentions using training and test data, but it does not explicitly specify train/validation/test dataset splits with percentages or sample counts for reproducibility. It states 'We augment the CIFAR-10 dataset with horizontally flipped images, resulting in a total of 100000 training images' and later 'test split of each dataset'. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run its experiments. It mentions 'limited computing resources' but no model numbers or specifications. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with versions) needed to replicate the experiment. |
| Experiment Setup | Yes | For all experiments, we set H = 10000 for the number of projections in each step and set the step size η = d. The number of simulation steps K varies from 10000 to 20000 for different datasets, due to different resolutions and pyramidal schedules. For MNIST and Fashion-MNIST, we set M = 2.5 × 10⁵. For CIFAR10 and CelebA, we set M = 7 × 10⁵ and M = 4.5 × 10⁵, respectively. We set the amplifier ξ = 10 for all datasets. |
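The reported settings can be collected into a small lookup helper for reference. This is an illustrative sketch only: the function name and dictionary keys are our own, the paper specifies the symbols H, η, K, M, and ξ, and η depends on the data dimensionality d of each dataset (not filled in here).

```python
def reported_hyperparams(dataset):
    """Return the per-dataset settings quoted in the Experiment Setup row.

    Hypothetical helper for readability; key names are ours, not the paper's.
    """
    # Settings shared across all datasets
    common = {
        "H": 10_000,   # number of projections per simulation step
        "xi": 10,      # amplifier
        # step size eta = d (data dimensionality, dataset-dependent)
        # simulation steps K vary from 10_000 to 20_000 per dataset
    }
    # Dataset-specific value of M
    per_dataset = {
        "MNIST":         {"M": 250_000},
        "Fashion-MNIST": {"M": 250_000},
        "CIFAR10":       {"M": 700_000},
        "CelebA":        {"M": 450_000},
    }
    return {**common, **per_dataset[dataset]}
```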