Toward Realistic Virtual Try-on Through Landmark Guided Shape Matching
Authors: Guoqiang Liu, Dan Song, Ruofeng Tong, Min Tang (pp. 2118–2126)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Qualitative and quantitative experiments on two public datasets validate the superiority of the proposed method, especially for challenging cases such as large geometric changes and complex clothes patterns. |
| Researcher Affiliation | Academia | 1Zhejiang University 2Tianjin University |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code will be available at https://github.com/lgqfhwy/LM-VTON. |
| Open Datasets | Yes | As most image-based virtual try-on methods do (Han et al. 2018; Wang et al. 2018a; Yang et al. 2020), we use the dataset collected by Han et al.(Han et al. 2018) (denoted as Zalando dataset in this paper) to comprehensively compare the proposed method with the state-of-the-art methods. ... we use another dataset MPV (Dong et al. 2019) to show the ability of the proposed method to deal with more challenging conditions. |
| Dataset Splits | No | For Zalando dataset: "There are 16,253 available pairs for training, 14,221 of which are organized as training set and the rest are used as testing set." For MPV dataset: "The selected data are split to the training set and the testing set with 16,585 and 3,344 pairs respectively." No explicit validation set is mentioned. |
| Hardware Specification | Yes | All the codes are implemented on PyTorch and run on one NVIDIA 2080Ti GPU. |
| Software Dependencies | No | The paper mentions "PyTorch" but does not specify a version number for it or any other software dependencies. |
| Experiment Setup | Yes | For the LGSM module, we set λ_l = λ_l2 = λ_v = λ_m = 1 and λ_lm = 0.01. For the SVTO module, we set λ_c = λ_2 = 1, λ_v2 = 10. Each module is trained for 20 epochs with batch size 4. The learning rate is initialized to 0.0002 and the Adam optimizer is adopted with the hyper-parameters β_1 = 0.5 and β_2 = 0.999. |
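
The reported setup can be collected into a single configuration, which is what a reproduction attempt would start from. This is a hedged sketch only: the key names (`lgsm`, `svto`, the `lambda_*` entries) are illustrative labels mirroring the paper's loss-weight symbols, not identifiers from the released code.

```python
# Sketch of the training configuration reported in the paper.
# All values come from the "Experiment Setup" row above; the dict
# structure and key names are our own illustrative choices.
TRAIN_CONFIG = {
    "loss_weights": {
        # LGSM module: lambda_l = lambda_l2 = lambda_v = lambda_m = 1, lambda_lm = 0.01
        "lgsm": {"lambda_l": 1, "lambda_l2": 1, "lambda_v": 1,
                 "lambda_m": 1, "lambda_lm": 0.01},
        # SVTO module: lambda_c = lambda_2 = 1, lambda_v2 = 10
        "svto": {"lambda_c": 1, "lambda_2": 1, "lambda_v2": 10},
    },
    "epochs_per_module": 20,   # each module trained for 20 epochs
    "batch_size": 4,
    "optimizer": {             # Adam with lr = 0.0002, beta1 = 0.5, beta2 = 0.999
        "name": "Adam",
        "lr": 2e-4,
        "betas": (0.5, 0.999),
    },
}
```

In a PyTorch reproduction these values would be passed straight through, e.g. `torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.5, 0.999))`.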