G2L-CariGAN: Caricature Generation from Global Structure to Local Features

Authors: Xin Huang, Yunfeng Bai, Dong Liang, Feng Tian, Jinyuan Jia

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this section, we compare our G2L-CariGAN to state-of-the-art methods and evaluate its performance."
Researcher Affiliation | Academia | "1 Tongji University, 2 Duke Kunshan University; {huangxin0124, 2131480, sse liangdong, jyjia}@tongji.edu.cn, feng.tian978@dukekunshan.edu.cn"
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement of, or a link to, open-source code for the proposed method.
Open Datasets | Yes | "To train the style transfer module Ts, we use WebCaricature (Huo et al. 2018), which is a large unpaired photo-caricature dataset consisting of 6042 caricatures and 5974 photos from 252 persons in total."
Dataset Splits | No | The paper mentions training and testing but does not explicitly describe a validation split (e.g., percentages, sample counts, or a citation to a predefined split).
Hardware Specification | Yes | "We use an RTX 3060 GPU for all experiments."
Software Dependencies | No | The paper does not provide version numbers for software dependencies such as programming languages, libraries, or frameworks.
Experiment Setup | Yes | "In Eq. 3, we set λcon = 0.01 and λsty = 20. In Eq. 9, we set λrec = 1, λc = 1, λcariid = 0.01. wr can be set to different values to control the degree of exaggeration. The learning rate, number of epochs, and batch size are set to 0.01, 2000, and 1, respectively."
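
The Experiment Setup row above lists all of the hyperparameters the paper reports, which can be collected into a single configuration block. Below is a minimal sketch in Python, assuming a standard training script: the key names, the interpretation of the loss weights (content, style, reconstruction, caricature identity), the placeholder value for w_r, and the helper total_loss_eq9 are assumptions for illustration, not the authors' code; only the numeric values come from the paper.

```python
# Hypothetical configuration collecting the hyperparameters quoted above.
# Key names and the meaning of each loss weight are assumptions; only the
# numeric values are taken from the paper.
config = {
    # Eq. 3 (style-transfer objective) weights
    "lambda_con": 0.01,     # assumed to weight a content loss
    "lambda_sty": 20.0,     # assumed to weight a style loss
    # Eq. 9 (overall objective) weights
    "lambda_rec": 1.0,      # assumed to weight a reconstruction loss
    "lambda_c": 1.0,
    "lambda_cariid": 0.01,  # assumed to weight a caricature-identity loss
    # w_r controls the degree of exaggeration; the paper leaves it
    # user-adjustable, so 1.0 here is only a placeholder.
    "w_r": 1.0,
    # Optimization settings reported in the paper
    "learning_rate": 0.01,
    "epochs": 2000,
    "batch_size": 1,
}

def total_loss_eq9(losses: dict, cfg: dict) -> float:
    """Illustrative weighted sum for Eq. 9; the term names are assumed."""
    return (cfg["lambda_rec"] * losses["rec"]
            + cfg["lambda_c"] * losses["c"]
            + cfg["lambda_cariid"] * losses["cariid"])
```

A training script would then read learning_rate, epochs, and batch_size from this dictionary and combine the per-term losses through total_loss_eq9 at each step, while w_r would be varied to produce different degrees of exaggeration, as the paper describes.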