Instance-Level Facial Attributes Transfer with Geometry-Aware Flow

Authors: Weidong Yin, Ziwei Liu, Chen Change Loy

AAAI 2019, pp. 9111–9118

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluations validate the capability of our approach in transferring instance-level facial attributes faithfully across large pose and appearance gaps. In this section, we comprehensively evaluate our approach on different benchmarks with dedicated metrics.
Researcher Affiliation | Academia | Weidong Yin, University of British Columbia (wdyin@cs.ubc.ca); Ziwei Liu, Chinese University of Hong Kong (zwliu@ie.cuhk.edu.hk); Chen Change Loy, Nanyang Technological University (ccloy@ntu.edu.sg)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper provides a project page link (http://mmlab.ie.cuhk.edu.hk/projects/attribute-transfer/) but does not provide a direct link to a source-code repository (e.g., GitHub, GitLab, Bitbucket) nor an explicit statement of code release.
Open Datasets | Yes | Extensive evaluations on CelebA (Liu et al. 2015) and CelebA-HQ (Karras et al. 2017) datasets validate the effectiveness of our approach in transferring instance-level facial attributes faithfully across large pose and appearance gaps.
Dataset Splits | Yes | We use the standard training, validation and test splits. (A hedged data-loading sketch follows the table.)
Hardware Specification | No | The paper does not specify the hardware used to run its experiments (exact GPU/CPU models, processor speeds, memory amounts, or other machine details).
Software Dependencies | No | The paper mentions software components and architectures such as the Adam optimizer, Pix2PixHD, ResNet-18, LSGAN, PatchGAN, and U-Net, but does not provide version numbers for any software libraries or dependencies. (An illustrative LSGAN/PatchGAN sketch follows the table.)
Experiment Setup | Yes | Input image values are normalized to [-1, 1]. All models are trained using the Adam (Kingma and Ba 2014) optimizer with a base learning rate of 0.002 and a batch size of 8. We perform data augmentation by random horizontal flipping with a probability of 0.5. (A training-setup sketch follows the table.)
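
The Dataset Splits row refers to CelebA's standard train/valid/test partition. The paper does not describe its data pipeline, so the following is only a minimal sketch of one way to obtain those splits, using torchvision's built-in CelebA dataset; the root directory is a placeholder.

```python
# Hypothetical loading of CelebA's standard splits via torchvision;
# the paper itself does not state which loader it used.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

train_set = datasets.CelebA(root="data", split="train", target_type="attr",
                            transform=to_tensor, download=True)
valid_set = datasets.CelebA(root="data", split="valid", target_type="attr",
                            transform=to_tensor, download=True)
test_set = datasets.CelebA(root="data", split="test", target_type="attr",
                           transform=to_tensor, download=True)

# Standard CelebA partition sizes: 162,770 / 19,867 / 19,962 images.
print(len(train_set), len(valid_set), len(test_set))
```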
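The Software Dependencies row names LSGAN and PatchGAN among the building blocks. For reference, the sketch below shows how those two pieces typically fit together: a PatchGAN discriminator that scores overlapping patches rather than the whole image, trained with the LSGAN least-squares objective. The layer widths and depth are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PatchGANDiscriminator(nn.Module):
    """PatchGAN-style discriminator: outputs a grid of real/fake scores,
    one per overlapping image patch, instead of a single scalar."""
    def __init__(self, in_channels=3, base_width=64):
        super().__init__()
        layers, width = [], base_width
        layers += [nn.Conv2d(in_channels, width, 4, stride=2, padding=1),
                   nn.LeakyReLU(0.2, inplace=True)]
        for _ in range(2):  # depth chosen for illustration only
            layers += [nn.Conv2d(width, width * 2, 4, stride=2, padding=1),
                       nn.InstanceNorm2d(width * 2),
                       nn.LeakyReLU(0.2, inplace=True)]
            width *= 2
        layers += [nn.Conv2d(width, 1, 4, stride=1, padding=1)]  # patch score map
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def lsgan_d_loss(d, real, fake):
    # LSGAN replaces the cross-entropy GAN loss with least squares:
    # real patches are pushed toward 1, fake patches toward 0.
    return 0.5 * ((d(real) - 1).pow(2).mean() + d(fake.detach()).pow(2).mean())

def lsgan_g_loss(d, fake):
    # The generator is rewarded when its fake patches score close to 1.
    return 0.5 * (d(fake) - 1).pow(2).mean()
```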
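The Experiment Setup row lists the only training hyperparameters the paper states: [-1, 1] normalization, horizontal flipping with probability 0.5, Adam with a base learning rate of 0.002, and a batch size of 8. A minimal PyTorch sketch wiring these together follows; the stand-in one-layer model and the specific torchvision transforms are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Preprocessing as stated in the paper: random horizontal flip with
# probability 0.5, pixel values normalized to [-1, 1].
transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),                                # scales to [0, 1]
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # maps to [-1, 1]
])

train_set = datasets.CelebA(root="data", split="train",
                            transform=transform, download=True)
loader = DataLoader(train_set, batch_size=8, shuffle=True)  # batch size 8 (paper)

# Stand-in module: the paper's geometry-aware flow network is not reproduced here.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)  # base LR from the paper
```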