Ultrafast Photorealistic Style Transfer via Neural Architecture Search
Authors: Jie An, Haoyi Xiong, Jun Huan, Jiebo Luo (pp. 10443-10450)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on both image and video transfer. The results show that our method can produce favorable results while achieving 20-30 times acceleration in comparison with the existing state-of-the-art approaches. |
| Researcher Affiliation | Collaboration | Jie An, 1 Haoyi Xiong, 2 Jun Huan,3 Jiebo Luo1 1University of Rochester, 2Baidu Research, 3Styling AI Inc. |
| Pseudocode | No | The paper describes the proposed method using figures and prose, but does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | All the source code will be released in the future. |
| Open Datasets | Yes | The decoder (without transfer modules) is trained on MS COCO dataset (Lin et al. 2014) to invert deep features of the encoder back to images. |
| Dataset Splits | Yes | Given the MS COCO as the training dataset and a validation dataset with 40 content and style photos, we first train PhotoNet as the Supervisory Oracle for the subsequent architecture search. ... Given a validation dataset that contains 73 content and style photo pairs, we quantitatively evaluate the performance of the proposed and state-of-the-art methods by computing the above-mentioned metrics on this validation set. |
| Hardware Specification | Yes | All approaches are tested on the same computing platform which includes an NVIDIA P100 GPU card with 16GB RAM. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | No | The paper mentions that the encoder is fixed and the decoder is trained for image reconstruction, and that hyperparameters (α, β, γ) are used for trade-off in the search objective, but it does not provide their specific values or other detailed hyperparameters like learning rate, batch size, or optimizer settings. |
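The hyperparameters (α, β, γ) mentioned above weight competing terms in the architecture-search objective, but the paper does not state the terms' exact form or the weight values. As a minimal sketch of such a weighted trade-off objective, assuming (hypothetically) a content term, a style term, and an efficiency penalty, with placeholder weights not taken from the paper:

```python
def search_objective(content_loss: float, style_loss: float,
                     efficiency_penalty: float,
                     alpha: float = 1.0, beta: float = 1.0,
                     gamma: float = 0.1) -> float:
    """Combine three hypothetical loss terms into one scalar to minimize.

    The term names and default weights are illustrative placeholders;
    the paper does not specify alpha, beta, or gamma.
    """
    return alpha * content_loss + beta * style_loss + gamma * efficiency_penalty

# Comparing two candidate architectures under the same (assumed) weights:
obj_a = search_objective(content_loss=0.8, style_loss=1.2, efficiency_penalty=2.0)
obj_b = search_objective(content_loss=0.9, style_loss=1.1, efficiency_penalty=0.5)
best = "A" if obj_a < obj_b else "B"
```

With these placeholder weights, candidate B wins despite slightly worse content loss, because the efficiency penalty is much lower — the kind of speed/quality trade-off the search objective is meant to encode.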