CycleEmotionGAN: Emotional Semantic Consistency Preserved CycleGAN for Adapting Image Emotions

Authors: Sicheng Zhao, Chuang Lin, Pengfei Xu, Sendong Zhao, Yuchen Guo, Ravi Krishna, Guiguang Ding, Kurt Keutzer

AAAI 2019, pp. 2620-2627 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experiments are conducted on the ArtPhoto and FI datasets, and the results demonstrate that CycleEmotionGAN significantly outperforms the state-of-the-art UDA approaches. |
| Researcher Affiliation | Collaboration | University of California, Berkeley, USA; Harbin Institute of Technology, China; Didi Chuxing, China; Cornell University, USA; Tsinghua University, China |
| Pseudocode | Yes | Algorithm 1: Adversarial training procedure of the proposed CycleEmotionGAN model. (A minimal training-loop sketch follows this table.) |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | Extensive experiments are conducted on the ArtPhoto and FI datasets, and the results demonstrate that CycleEmotionGAN significantly outperforms the state-of-the-art UDA approaches. The Artistic (ArtPhoto) dataset (Machajdik and Hanbury 2010); the Flickr and Instagram (FI) dataset (You et al. 2016). |
| Dataset Splits | No | The paper mentions training and testing data but does not provide specific details on how the datasets were split into training, validation, and test sets, such as percentages, counts, or references to predefined splits. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions specific models and optimizers (e.g., ResNet-101, the Adam solver, the CycleGAN architecture, PatchGANs) but does not provide version numbers for general software dependencies such as programming languages or deep learning frameworks. |
| Experiment Setup | Yes | α in Eq. (4) is set to 10 in all experiments, as in (Zhu et al. 2017a). λ in Eq. (6) is set to 10 and 5 for the SKL and Mikels' wheel definitions of d(·, ·), respectively. We use the Adam solver with a batch size of 1. All generator and discriminator networks are trained from scratch with a learning rate of 0.0002, and the classifier F is trained with a learning rate of 0.0001. (See the training-loop and distance sketches after this table.) |