Combating Mode Collapse via Offline Manifold Entropy Estimation

Authors: Haozhe Liu, Bing Li, Haoqian Wu, Hanbang Liang, Yawen Huang, Yuexiang Li, Bernard Ghanem, Yefeng Zheng

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results show the effectiveness of our method, which outperforms the GAN baseline, MaF-GAN, on CelebA (9.13 vs. 12.43 in FID) and surpasses the recent state-of-the-art energy-based model on the ANIMEFACE dataset (2.80 vs. 2.26 in Inception score). (A hedged FID sketch appears after the table.)
Researcher Affiliation | Collaboration | 1 AI Initiative, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia; 2 Jarvis Lab, Tencent, Shenzhen 518057, China; 3 YouTu Lab, Tencent, Shenzhen 518057, China; 4 Shenzhen University, Shenzhen 518060, China
Pseudocode | No | The paper describes the methodology using text and mathematical equations, but does not contain a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | Code is available at https://github.com/HaozheLiu-ST/MEE.
Open Datasets | Yes | Extensive experiments are carried out on four publicly available datasets with different image sizes, including CIFAR-10 (32×32 pixels) (Krizhevsky, Hinton et al. 2009), ANIMEFACE (64×64), CelebA (256×256) (Liu et al. 2015), and FFHQ (1024×1024) (Karras, Laine, and Aila 2019). (A dataset-loading sketch appears after the table.)
Dataset Splits | No | The paper states that 'All the experimental settings, such as optimizer, network architecture and learning rate, are identical to the public benchmarks', implying standard splits, but it does not explicitly provide the splits themselves (exact percentages or sample counts).
Hardware Specification | Yes | We implement our MaEM-GAN using the public PyTorch toolbox on eight NVIDIA V100 GPUs.
Software Dependencies | No | The paper mentions using 'the public PyTorch toolbox' but does not specify its version number or other software dependencies with exact versions. (An environment-capture sketch appears after the table.)
Experiment Setup | No | The paper states that 'All the experimental settings, such as optimizer, network architecture and learning rate, are identical to the public benchmarks', but it does not explicitly provide concrete hyperparameter values or detailed training configurations within the main text.
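
To make the FID comparison in the Research Type row concrete (9.13 vs. 12.43 on CelebA), here is a minimal sketch of how FID is typically computed in PyTorch. The paper does not name its FID implementation; torchmetrics and its FrechetInceptionDistance class are assumptions chosen for illustration.

```python
# Minimal FID sketch (assumption: torchmetrics; the paper does not name
# its FID implementation). By default FrechetInceptionDistance expects
# uint8 image tensors in [0, 255], shaped (N, 3, H, W).
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)

# Placeholder batches; in practice these would come from the CelebA
# loader and the trained generator, respectively.
real_images = torch.randint(0, 256, (64, 3, 256, 256), dtype=torch.uint8)
fake_images = torch.randint(0, 256, (64, 3, 256, 256), dtype=torch.uint8)

fid.update(real_images, real=True)   # accumulate real-image statistics
fid.update(fake_images, real=False)  # accumulate generated-image statistics
print(f"FID: {fid.compute().item():.2f}")
```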
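
Of the four datasets in the Open Datasets row, CIFAR-10 and CelebA ship with torchvision, which makes that part of the data pipeline straightforward to reproduce; ANIMEFACE and FFHQ must be downloaded from their own distribution pages. A minimal loading sketch, assuming torchvision (the paper only says PyTorch was used):

```python
# Dataset-loading sketch. torchvision is an assumption; the paper only
# states that PyTorch was used. ANIMEFACE and FFHQ are not bundled with
# torchvision and must be obtained separately.
import torchvision.transforms as T
from torchvision.datasets import CIFAR10, CelebA

# CIFAR-10: 32x32 images, matching the size reported in the paper.
cifar = CIFAR10(root="./data", train=True, download=True,
                transform=T.ToTensor())

# CelebA: resized and center-cropped to 256x256, as in the paper.
celeba = CelebA(root="./data", split="train", download=True,
                transform=T.Compose([T.Resize(256),
                                     T.CenterCrop(256),
                                     T.ToTensor()]))
```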
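
Because the PyTorch version goes unreported (Software Dependencies row), logging the environment at run time is the pragmatic fallback when reproducing the eight-V100 setup. A hedged sketch using PyTorch's stock DataParallel; the authors' actual parallelization strategy is not stated, and the model here is a hypothetical stand-in:

```python
# Environment-capture sketch. The paper reports PyTorch on eight NVIDIA
# V100 GPUs but no version numbers, so record them at run time.
import torch
import torch.nn as nn

print("PyTorch:", torch.__version__)
print("CUDA:", torch.version.cuda)
print("GPUs:", torch.cuda.device_count())

# Hypothetical stand-in; the real networks come from the MEE repository.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 128))

# Mirror a multi-GPU run when several devices are visible (DataParallel
# is an assumption; the paper does not say how the GPUs were used).
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()
```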