RoboCoDraw: Robotic Avatar Drawing with GAN-Based Style Transfer and Time-Efficient Path Optimization

Authors: Tianying Wang, Wei Qi Toh, Hao Zhang, Xiuchao Sui, Shaohua Li, Yong Liu, Wei Jing | pp. 10402-10409

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments and Discussion. Datasets: We conducted experiments using the images from Chicago Face Dataset (CFD) (Ma, Correll, and Wittenbrink 2015)."
Researcher Affiliation | Academia | "1 Artificial Intelligence Initiative, A*STAR; 2 Institute of High Performance Computing, A*STAR; 3 Institute of Information Research, A*STAR; 1 Fusionopolis Way, Connexis North Tower, 138632, Singapore"
Pseudocode | No | The paper describes algorithms textually but does not include structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Code available at https://github.com/Psyche-mia/Avatar-GAN
Open Datasets | Yes | "Datasets: We conducted experiments using the images from Chicago Face Dataset (CFD) (Ma, Correll, and Wittenbrink 2015). For the cartoon-style avatar image dataset, considering the drawing media of robot arm is marker on the whiteboard, the avatars should be suited to artistic composition with clean, bold lines. In order to meet such a requirement, we used the Avataaars library to randomly generate diverse cartoon avatar images as our avatar dataset."
Dataset Splits | No | The paper states 'We randomly chose 1145 images from the CFD dataset and 852 images from generated avatar dataset to train Avatar GAN' and mentions 'test datasets' in Table 1, but does not provide specific percentages or sample counts for training, validation, and test splits.
Hardware Specification | No | The paper mentions the 'UR5 robotic arm' for drawing but does not provide specific details about the computing hardware (e.g., GPU model, CPU, RAM) used for model training or other computational processes.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., 'Python 3.x', 'PyTorch 1.y', 'CUDA z.w') for replicating the experiments.
Experiment Setup | Yes | "For Avatar GAN, we used the same architectures of generator and discriminator proposed by CycleGAN for a fair comparison... All four discriminators utilized 70×70 PatchGANs... we set α = 0.2 to encourage the generator to focus more on learning facial features. The weight λ, which controls the relative importance of consistency loss, was set to 10. The parameters used for RKGA are N = 100, r = 3, p_crossover = 0.8, p_mutation = 0.5, and cost_lift = 30. For the local search heuristic, the threshold percentile v_thres for the two-level improvement is set by v_thres = min(0.05 + 0.01c, 0.10)."
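The RKGA hyperparameters and the two-level local-search threshold quoted in the Experiment Setup row can be collected into a small sketch. This is not the authors' code: the parameter names, the role of cost_lift, and the interpretation of c (taken here as an iteration counter) are assumptions made for illustration only.

```python
# Hypothetical sketch of the reported RKGA settings and the
# two-level local-search threshold schedule; names are illustrative.

RKGA_PARAMS = {
    "population_size": 100,  # N = 100
    "r": 3,                  # reported as r = 3 (role not detailed here)
    "p_crossover": 0.8,
    "p_mutation": 0.5,
    "cost_lift": 30,         # assumed to penalize pen-lift transitions
}

def v_thres(c: int) -> float:
    """Threshold percentile for the two-level improvement:
    v_thres = min(0.05 + 0.01*c, 0.10), capped at 0.10."""
    return min(0.05 + 0.01 * c, 0.10)
```

Under this reading, the threshold starts at 5% and grows by one percentage point per iteration until it saturates at 10%, so later iterations accept a slightly wider pool of candidate improvements.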