SDGAN: Disentangling Semantic Manipulation for Facial Attribute Editing

Authors: Wenmin Huang, Weiqi Luo, Jiwu Huang, Xiaochun Cao

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We extensively evaluate our method on the CelebA-HQ database, providing both qualitative and quantitative analyses. Our results establish that SDGAN significantly outperforms state-of-the-art techniques, showcasing the effectiveness of our approach."
Researcher Affiliation | Academia | 1) School of Computer Science and Engineering, Sun Yat-sen University, China; 2) Shenzhen Key Laboratory of Media Security, Shenzhen University, China; 3) School of Cyber Science and Technology, Sun Yat-sen University, China
Pseudocode | No | The paper describes its methodology and training objectives in textual form and through mathematical equations, but it does not include any clearly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | Yes | "The code implementing our model is available at https://github.com/sysuhuangwenmin/SDGAN."
Open Datasets | Yes | "Like previous methods (Li et al. 2021b; Pehlivan, Dalva, and Dundar 2023), we evaluate our method on CelebA-HQ (Karras et al. 2018), which comprises 30,000 facial images with attribute annotations."
Dataset Splits | No | "Following (Li et al. 2021b), we split CelebA-HQ into a test set of 3,000 images and a training set of 27,000 images." The paper explicitly specifies training and test splits but provides no details of a separate validation set.
Hardware Specification | No | The paper does not provide specific hardware details, such as GPU models, CPU types, or memory amounts, used for running its experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or CUDA versions) required to replicate the experiments.
Experiment Setup | No | The paper describes the overall framework and loss functions but does not provide specific experimental setup details, such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings), in the main text.
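The Dataset Splits row above reports a 27,000/3,000 train/test partition of CelebA-HQ. The exact partition used by the paper (which follows Li et al. 2021b) is not given here, so the sketch below assumes a simple contiguous index split purely for illustration; the function name and split order are hypothetical.

```python
# Hypothetical sketch of the reported CelebA-HQ split (27,000 train / 3,000 test).
# The paper's actual partition (following Li et al. 2021b) is not specified in
# this report, so a contiguous index-based split is assumed for illustration.

def split_celeba_hq(num_images=30000, num_test=3000):
    """Return (train_indices, test_indices) for a simple contiguous split."""
    indices = list(range(num_images))
    train = indices[:-num_test]   # first 27,000 image indices
    test = indices[-num_test:]    # last 3,000 image indices
    return train, test

train_idx, test_idx = split_celeba_hq()
print(len(train_idx), len(test_idx))  # 27000 3000
```

Any deterministic partition of the 30,000 annotated images that yields the same 27,000/3,000 sizes would match the reported counts; reproducing the paper's results exactly would still require the authors' specific split.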