Towards Hard-pose Virtual Try-on via 3D-aware Global Correspondence Learning
Authors: Zaiyu Huang, Hanhui Li, Zhenyu Xie, Michael Kampffmeyer, Qingling Cai, Xiaodan Liang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on public benchmarks and our Hard Pose test set demonstrate the superiority of our method against the SOTA try-on approaches. In Section 4 (Experiments), we conduct extensive experiments to validate the effectiveness of the proposed 3D-GCL network. |
| Researcher Affiliation | Collaboration | 1Shenzhen Campus of Sun Yat-Sen University, 2ByteDance, 3UiT The Arctic University of Norway, 4Peng Cheng Laboratory; {huangzy225, xiezhy6}@mail2.sysu.edu.cn, {lihh77, caiqingl}@mail.sysu.edu.cn, michael.c.kampffmeyer@uit.no, xdliang328@gmail.com |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code and data are provided in the supplementary material. |
| Open Datasets | Yes | Our experiments are conducted on two open-source datasets, DeepFashion [31] and MPV [32], which contain 52,712 and 37,723 fashion images, respectively. |
| Dataset Splits | No | To ensure a fair comparison, we follow the train/test split of [20, 22, 33] on the DeepFashion dataset and use the original split of MPV. In this way, we get 101,622/8,564 train/test pairs for DeepFashion and 52,236/10,544 train/test pairs for MPV. The paper specifies train/test splits but does not explicitly mention a separate validation split with quantities or percentages. |
| Hardware Specification | Yes | The proposed 3D-GCL network is implemented in PyTorch and trained with 4 Tesla V100 GPUs. |
| Software Dependencies | No | The proposed 3D-GCL network is implemented in PyTorch and trained with 4 Tesla V100 GPUs. We thank MindSpore for the partial support of this work, which is a new deep learning computing framework. The paper mentions PyTorch and MindSpore but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We first train the correspondence estimation subnetwork for 20 epochs with a batch-size of 8, and then follow the settings of [20, 25] to train the try-on generator. |