Examples-Rules Guided Deep Neural Network for Makeup Recommendation
Authors: Taleb Alashkar, Songyao Jiang, Shuyang Wang, Yun Fu
AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The performance of this framework is evaluated through extensive experimental analyses. The experiments validate the automatic facial traits classification, the recommendation effectiveness in statistical and perceptual ways, and the makeup synthesis accuracy, which outperforms state-of-the-art methods by a large margin. |
| Researcher Affiliation | Academia | Taleb Alashkar¹, Songyao Jiang¹, Shuyang Wang¹, and Yun Fu¹,² — ¹Department of Electrical & Computer Engineering, ²College of Computer & Information Science, Northeastern University, Boston, MA, USA |
| Pseudocode | Yes | Algorithm 1 Example-Rules based DNN Learning |
| Open Source Code | No | The paper mentions that the dataset will be publicly available after publishing, but there is no concrete statement or link provided for the source code of the methodology. |
| Open Datasets | Yes | In our database, there are 961 different females with two images each: one with a clean face and another after professional makeup. This dataset will be available for public use after this work is published. |
| Dataset Splits | Yes | 80% of the image pairs (examples) are used for training, 10% for validation, and 10% for testing, in 9-fold cross-validation (see the split sketch below the table). |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models or memory used for running its experiments. |
| Software Dependencies | No | The paper mentions software such as LIBSVM and the Face++ framework but does not provide version numbers for these or any other ancillary software dependencies. |
| Experiment Setup | Yes | A mini-batch gradient descent algorithm (Vincent et al. 2010) is used for more robust gradient descent performance, with mini-batch size 10. The number of epochs in training is 100, and the learning ratio β = 0.1 is selected empirically. The network has one input layer, three hidden layers with 100 hidden units each, a learning rate η = 10^-4, and one output layer with 8 different outputs (softmax). |
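
The Experiment Setup row pins down most of the training hyperparameters, so a minimal sketch of the reported configuration is possible. The sketch below assumes PyTorch; the input dimensionality (`input_dim`), the sigmoid activations, and the cross-entropy loss are assumptions not stated in the table (the cited Vincent et al. 2010 stacked autoencoders conventionally use sigmoid units), and the role of the empirically selected β = 0.1 is unspecified here, so it is omitted.

```python
import torch
import torch.nn as nn

# Hypothetical input size: the facial-trait feature dimensionality is not given here.
input_dim = 128

# One input layer, three hidden layers of 100 units each, and an 8-way output.
# Sigmoid activations are an assumption; softmax is folded into CrossEntropyLoss.
model = nn.Sequential(
    nn.Linear(input_dim, 100), nn.Sigmoid(),
    nn.Linear(100, 100), nn.Sigmoid(),
    nn.Linear(100, 100), nn.Sigmoid(),
    nn.Linear(100, 8),
)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)  # learning rate η = 10^-4
criterion = nn.CrossEntropyLoss()  # assumed loss; applies log-softmax internally

def train(loader, epochs=100):
    """Mini-batch training loop: 100 epochs over batches (loader uses batch_size=10)."""
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
```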
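
The Dataset Splits row reports 80/10/10 splits under 9-fold cross-validation but does not say how the nine folds are constructed. One plausible reading, sketched below under that assumption, partitions the 961 pairs into ten roughly equal chunks and rotates which chunks serve as validation and test.

```python
import numpy as np

def nine_fold_splits(n_pairs=961, seed=0):
    """Yield (train, val, test) index arrays for nine folds.

    Assumed protocol: shuffle once, cut into ten ~10% chunks, and for each
    fold hold out one chunk for validation and the next for testing, leaving
    ~80% of pairs for training.
    """
    rng = np.random.default_rng(seed)
    chunks = np.array_split(rng.permutation(n_pairs), 10)
    for fold in range(9):
        val, test = chunks[fold], chunks[fold + 1]
        train = np.concatenate(
            [c for i, c in enumerate(chunks) if i not in (fold, fold + 1)]
        )
        yield train, val, test
```

Each fold holds out two disjoint chunks, so the proportions match the reported 80/10/10; the paper may use a different rotation, and this is only one reading consistent with the quoted text.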