ExprGAN: Facial Expression Editing With Controllable Expression Intensity
Authors: Hui Ding, Kumar Sricharan, Rama Chellappa
AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Quantitative and qualitative evaluations on the widely used Oulu-CASIA dataset demonstrate the effectiveness of ExprGAN. |
| Researcher Affiliation | Collaboration | Hui Ding (1), Kumar Sricharan (2), Rama Chellappa (3); affiliations: (1, 3) University of Maryland, College Park; (2) PARC, Palo Alto |
| Pseudocode | No | The paper describes algorithms and training steps in paragraph form, but does not include any clearly labeled "Pseudocode" or "Algorithm" blocks or figures. |
| Open Source Code | No | The paper does not explicitly state that source code for the described methodology is publicly available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We evaluated the proposed ExprGAN on the widely used Oulu-CASIA (Zhao et al. 2011) dataset. |
| Dataset Splits | Yes | Training and testing sets are divided based on identity, with 1296 for training and 144 for testing. |
| Hardware Specification | No | The paper does not specify the hardware used for training or experimentation, such as specific GPU or CPU models; it only mentions that the method is implemented in TensorFlow. |
| Software Dependencies | No | The paper mentions "Tensorflow (Abadi et al. 2016)" but does not give its version number, nor version numbers for any other software dependencies such as the programming language or supporting libraries. |
| Experiment Setup | Yes | We train the networks using the Adam optimizer (Kingma and Ba 2014), with learning rate of 0.0002, β1 = 0.5, β2 = 0.999 and mini-batch size of 48. In the image refining stage, we empirically set λ1 = 1, λ2 = 1, λ3 = 0.01, λ4 = 0.01, λ5 = 0.001. |
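
The identity-based split quoted in the Dataset Splits row (1296 samples for training, 144 for testing) means no subject appears in both sets. Below is a minimal sketch of such an identity-disjoint split; the paper does not publish its subject list or frame selection, so `images_by_subject`, the test fraction, and the seed are illustrative assumptions only.

```python
import random

def split_by_identity(images_by_subject, test_fraction=0.1, seed=0):
    """Split data so that no subject (identity) appears in both train and test.

    images_by_subject: hypothetical dict mapping a subject ID to a list of
    image paths, e.g. {"P001": ["P001_happy_f01.jpg", ...], ...}.
    """
    subject_ids = sorted(images_by_subject)
    random.Random(seed).shuffle(subject_ids)

    # Hold out a fraction of identities (at least one) for testing.
    n_test = max(1, int(len(subject_ids) * test_fraction))
    test_subjects = set(subject_ids[:n_test])

    train, test = [], []
    for sid, images in images_by_subject.items():
        (test if sid in test_subjects else train).extend(images)
    return train, test
```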
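
For the Experiment Setup row, the quoted hyperparameters can be collected into a minimal configuration sketch. This assumes a TensorFlow 2 / Keras environment for readability; the paper only states that TensorFlow was used and its code is not public, so none of the variable names below come from the authors' implementation.

```python
import tensorflow as tf

# Adam optimizer with the hyperparameters quoted from the paper.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=2e-4,  # "learning rate of 0.0002"
    beta_1=0.5,
    beta_2=0.999,
)

BATCH_SIZE = 48  # mini-batch size reported in the paper

# Loss weights quoted for the image refining stage (lambda_1 ... lambda_5).
# The mapping of each weight to a specific loss term follows the paper's
# objective, which is not reproduced here.
LOSS_WEIGHTS = {
    "lambda_1": 1.0,
    "lambda_2": 1.0,
    "lambda_3": 0.01,
    "lambda_4": 0.01,
    "lambda_5": 0.001,
}
```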