AE-OT: A NEW GENERATIVE MODEL BASED ON EXTENDED SEMI-DISCRETE OPTIMAL TRANSPORT

Authors: Dongsheng An, Yang Guo, Na Lei, Zhongxuan Luo, Shing-Tung Yau, Xianfeng Gu

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In order to validate that the proposed method can solve the mode collapse/mixture problems and generate controllable high quality images, several experiments are conducted.
Researcher Affiliation | Academia | Department of Computer Science, Stony Brook University; DUT-RU ISE, Dalian University of Technology; Department of Mathematics, Harvard University
Pseudocode | Yes | Algorithm 1: Semi-discrete OT Map (a hedged sketch of this solve appears after the table)
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing their source code for the described methodology, nor does it provide a direct link to a code repository. The mention that 'corresponding images can be downloaded here' refers to results/visualizations, not code.
Open Datasets | Yes | In the last experiment, we run the proposed method mainly on 4 public datasets: MNIST (LeCun & Cortes, 2010), Fashion-MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky, 2009) and CelebA (Zhang et al., 2018), as Hoshen & Malik (2019), Sajjadi et al. (2018) and Lucic et al. (2018) did in their papers. (A loader sketch appears after the table.)
Dataset Splits | No | The paper mentions using 'test sets' for evaluation, but it does not provide specific train/validation/test split percentages or sample counts, nor does it refer to predefined splits with citations, which would be needed to reproduce the data partitioning.
Hardware Specification | No | The paper mentions a 'GPU-based convex optimization algorithm' but does not specify any exact GPU models (e.g., NVIDIA A100), CPU models, or memory amounts used for the experiments. General terms like 'on GPUs' are not sufficient to describe hardware specifications.
Software Dependencies | No | The paper describes network architectures (e.g., kernel size, stride, activation functions) and mentions algorithms like Adam, but it does not provide specific software names with version numbers (e.g., Python 3.8, PyTorch 1.9, CUDA 11.1, TensorFlow 2.x) that are necessary for reproducible software dependencies.
Experiment Setup | Yes | Besides, the architecture of the decoder is the same as in Lucic et al. (2018), in which the authors did a large-scale study to evaluate the performance of 8 different generative models, including various GAN models and VAE, and the encoder is set to be the mirror of the decoder. ... We use the architecture shown in Table 3 and Table 4, with the decoder architecture the same as the consistent generator architectures in GANs, and the encoder having the mirrored architecture. (A sketch of such a mirrored pair appears after the table.)
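
Regarding the 'Pseudocode' row: Algorithm 1 in the paper computes the semi-discrete OT map by descending a convex energy whose gradient is the difference between the measure of each power cell and its prescribed target weight. Below is a minimal Monte Carlo sketch in PyTorch, not the authors' implementation; the function name, the uniform source measure, and all hyperparameters are assumptions for illustration.

```python
import torch

def semi_discrete_ot(targets, nu, n_iters=500, batch=4096, lr=0.01):
    """Monte Carlo sketch of a semi-discrete OT solve (illustrative, not Algorithm 1 verbatim).

    targets: (n, d) tensor of discrete target points y_i
    nu:      (n,)  tensor of target weights nu_i, summing to 1
    """
    n, d = targets.shape
    h = torch.zeros(n)
    for _ in range(n_iters):
        x = torch.rand(batch, d)                    # samples from the (assumed uniform) source
        idx = (x @ targets.T + h).argmax(dim=1)     # assign each sample to its power cell
        cell_mass = torch.bincount(idx, minlength=n).float() / batch
        h -= lr * (cell_mass - nu)                  # gradient step on the convex energy
        h -= h.mean()                               # remove the additive ambiguity in h
    return h
```

The returned potential h induces the transport map: a source sample x is sent to the target point y_i that maximizes <x, y_i> + h_i.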
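
Regarding the 'Open Datasets' row: all four datasets are public and available through torchvision. A minimal loader sketch; the root path and transform are assumptions, and the paper does not prescribe this tooling.

```python
# Hypothetical loaders for the four public datasets named above.
from torchvision import datasets, transforms

tfm = transforms.ToTensor()
mnist   = datasets.MNIST("./data", train=True, download=True, transform=tfm)
fashion = datasets.FashionMNIST("./data", train=True, download=True, transform=tfm)
cifar10 = datasets.CIFAR10("./data", train=True, download=True, transform=tfm)
celeba  = datasets.CelebA("./data", split="train", download=True, transform=tfm)
```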
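
Regarding the 'Experiment Setup' row: the decoder follows the generator architectures benchmarked by Lucic et al. (2018), with the encoder as its mirror. A hedged PyTorch sketch of such a mirrored pair for 32x32 RGB images; the channel counts, kernel sizes, and latent dimension are illustrative assumptions, not the paper's Table 3/4 values.

```python
import torch.nn as nn

latent_dim = 128  # assumed latent size, for illustration only

# Encoder: mirror of the decoder below (conv downsampling in place of deconv upsampling).
encoder = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),    # 32 -> 16
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 16 -> 8
    nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2), # 8 -> 4
    nn.Flatten(), nn.Linear(256 * 4 * 4, latent_dim),
)

# Decoder: transposed-conv upsampling back to image space.
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256 * 4 * 4), nn.Unflatten(1, (256, 4, 4)),
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 4 -> 8
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
    nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 16 -> 32
)
```

Each encoder stage halves the spatial resolution exactly where the corresponding decoder stage doubles it, which is what "the encoder is set to be the mirror of the decoder" describes.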