Replicating Active Appearance Model by Generator Network

Authors: Tian Han, Jiawen Wu, Ying Nian Wu

IJCAI 2018

Reproducibility assessment. Each variable is listed with its result, followed by the LLM's supporting response.

Research Type: Experimental
We conduct experiments to investigate whether the generator network can replicate or imitate the AAM, where the AAM serves as the teacher model and the generator network plays the role of the student model. In the learning stage, the generator network only has access to the images generated by the AAM. It does not have access to the shape and appearance variables (latent code) used by the AAM to generate the images. After learning the generator network, we investigate the relationship between the latent code of the learned generator network and the latent code of the AAM.
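
A minimal PyTorch sketch of one way to realize this student setup: fit a generator to the AAM-produced images while treating each image's latent code as an unknown optimized jointly with the network weights. The architecture, loss, and latent-inference scheme here are illustrative assumptions, not the authors' exact algorithm.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent code z to a 64x64 RGB image (hypothetical architecture)."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.ReLU(),  # 1x1  -> 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),    # 4x4  -> 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),     # 8x8  -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),      # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),       # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

def train_student(images, z_dim=100, steps=500, lr=2e-4):
    """Fit the generator to AAM-produced images only (scaled to [-1, 1]);
    latent codes are unknown and optimized jointly with the weights."""
    g = Generator(z_dim)
    z = torch.randn(images.size(0), z_dim, requires_grad=True)  # one code per image
    opt = torch.optim.Adam(list(g.parameters()) + [z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((g(z) - images) ** 2).mean()  # pixel-wise reconstruction loss
        loss.backward()
        opt.step()
    return g, z.detach()
```

Jointly optimizing the codes with the weights mirrors the stated constraint that the student never observes the AAM's latent variables; only the inferred codes can then be compared against them.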

Researcher Affiliation: Academia
(1) University of California, Los Angeles; (2) Beijing Institute of Technology

Pseudocode: No
The paper does not include any structured pseudocode or algorithm blocks.

Open Source Code: No
The paper provides no links to, or explicit statements about the release of, its source code.

Open Datasets: No
The paper states, "We pre-train the AAM using approximately 200 frontal face images with given landmarks or control points" and "To generate face stimuli for our experiments, we randomly generate 20,000 face images from the above pre-trained AAM." However, it does not provide any concrete access information (link, DOI, or citation) for these datasets.
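
For context on how such stimuli are produced: an AAM generates an image by drawing shape and appearance coefficients, forming linear shape and texture models, and warping the texture onto the sampled shape. The sketch below illustrates one sample draw in numpy/scikit-image; all array shapes, names, and the piecewise-affine warp are assumptions, since the paper does not describe its AAM implementation at this level of detail.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def sample_aam(mean_shape, shape_basis, mean_tex, tex_basis, rng):
    """mean_shape: (K, 2) landmark coordinates; shape_basis: (n_s, K, 2);
    mean_tex: (H, W) texture in the mean-shape frame; tex_basis: (n_t, H, W)."""
    b_s = rng.standard_normal(shape_basis.shape[0])   # shape coefficients
    b_t = rng.standard_normal(tex_basis.shape[0])     # appearance coefficients
    shape = mean_shape + np.tensordot(b_s, shape_basis, axes=1)
    tex = np.clip(mean_tex + np.tensordot(b_t, tex_basis, axes=1), 0.0, 1.0)
    # Warp the texture from the mean-shape frame onto the sampled shape;
    # warp() expects a map from output coordinates back to source coordinates.
    tform = PiecewiseAffineTransform()
    tform.estimate(shape, mean_shape)
    return warp(tex, tform)
```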

Dataset Splits: No
The paper states that 20,000 images are used for training and 2,000 images for testing. It does not mention a separate validation set or provide explicit train/validation/test splits.
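
A sketch of the reported partition, assuming a single pool of 22,000 AAM-generated images (the paper does not say whether the 2,000 test images are held out from one pool or generated separately):

```python
import numpy as np

rng = np.random.default_rng(0)        # seed is an assumption
idx = rng.permutation(22_000)         # assumes one pool of 22,000 images
train_idx, test_idx = idx[:20_000], idx[20_000:]
assert train_idx.size == 20_000 and test_idx.size == 2_000
```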

Hardware Specification: No
The paper does not provide any specific hardware details, such as GPU or CPU models used for running the experiments.

Software Dependencies: No
The paper mentions using the "Adam optimizer" but does not specify any software dependencies or libraries with version numbers (e.g., Python, PyTorch, or TensorFlow versions).

Experiment Setup: Yes
"We used Adam optimizer [Kingma and Ba, 2014] with initial learning rate 0.0002 for 500 iterations. The training images are also re-sized to [64, 64] to ease the computation. We tried different dimensionalities for the latent code Z, including 20, 100 and 200 dimensions." The paper also reports: "The learning rate is 0.0001 with 900 epochs."
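
The quoted hyperparameters translate directly into an optimizer configuration. The sketch below assumes PyTorch, a placeholder generator, and library defaults for everything the paper does not report (Adam betas, batch size, resize interpolation); the second quoted setting (learning rate 0.0001, 900 epochs) would be substituted analogously.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

preprocess = T.Compose([T.ToTensor(), T.Resize((64, 64))])  # "re-sized to [64, 64]"

for z_dim in (20, 100, 200):                  # latent dimensionalities tried
    generator = nn.Sequential(                # placeholder architecture
        nn.Linear(z_dim, 256), nn.ReLU(),
        nn.Linear(256, 64 * 64 * 3), nn.Tanh(),
    )
    optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)  # lr 0.0002
    for step in range(500):                   # "for 500 iterations"
        pass  # training step elided
```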