r-BTN: Cross-Domain Face Composite and Synthesis From Limited Facial Patches
Authors: Yang Song, Zhifei Zhang, Hairong Qi
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have been conducted to demonstrate the superior performance from r-BTN as compared to existing potential solutions. |
| Researcher Affiliation | Academia | Yang Song, Zhifei Zhang, Hairong Qi Department of Electrical Engineering and Computer Science University of Tennessee, Knoxville, TN 37996, USA {ysong18, zzhang61, hqi}@utk.edu |
| Pseudocode | No | The paper describes the algorithm using textual descriptions and mathematical equations along with flowcharts, but does not include a structured pseudocode or algorithm block. |
| Open Source Code | No | The paper mentions 'Details are shown in supplementary materials' but does not include an explicit statement about the release of its source code or a link to a code repository. |
| Open Datasets | Yes | We collect 1,577 face/sketch pairs from the datasets CUHK (Wang and Tang 2009), CUFSF (Zhang, Wang, and Tang 2011), AR (Martinez and Benavente 2007), FERET (Phillips et al. 2000), and IIIT-D (Bhatt et al. 2012). ... We collect frontal face images with uniform background and controlled illumination from datasets CFD (Ma, Correll, and Wittenbrink 2015), Siblings DB (Vieira et al. 2014), and PUT (Kasinski, Florek, and Schmidt 2008)... |
| Dataset Splits | No | The paper mentions 3,126 face/sketch pairs in total and 300 pairs held out for testing, but does not specify a validation split or give explicit counts or percentages for all partitions. |
| Hardware Specification | No | The paper describes implementation details but does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper mentions the use of 'ADAM (Kingma and Ba 2014)' but does not list specific software dependencies with version numbers (e.g., programming languages, deep learning frameworks, or libraries). |
| Experiment Setup | Yes | In the training, we adopt ADAM (Kingma and Ba 2014) (α = 0.0002, β = 0.5). ... The parameter λ in Eq. 3 is set to be 100. Details are shown in supplementary materials. After 100 epochs, we could achieve the results as shown in this paper. |
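The hyperparameters quoted above can be collected into a minimal configuration sketch. This is not the authors' code (none was released); the loss composition is an assumption based on the common GAN-plus-reconstruction formulation that Eq. 3 with a λ-weighted term suggests, and the function name `total_generator_loss` is hypothetical.

```python
# Hedged sketch of the reported training setup:
# ADAM with alpha (learning rate) = 0.0002, beta = 0.5, and lambda = 100
# weighting the reconstruction term in Eq. 3. Training ran for 100 epochs.

ADAM_CONFIG = {"lr": 0.0002, "beta1": 0.5}  # alpha and beta from the paper
LAMBDA = 100      # weight on the reconstruction term (Eq. 3)
NUM_EPOCHS = 100  # "After 100 epochs, we could achieve the results..."


def total_generator_loss(adv_loss: float, recon_loss: float,
                         lam: float = LAMBDA) -> float:
    """Assumed combined objective: L = L_adv + lam * L_recon.

    The exact form of Eq. 3 is not reproduced in this table, so this
    weighted sum is an illustrative guess, not the paper's definition.
    """
    return adv_loss + lam * recon_loss
```

With these values, a reconstruction error of 0.01 contributes 1.0 to the objective, dominating a typical adversarial loss of around 0.5, which matches the common practice of weighting reconstruction heavily in cross-domain synthesis.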