Greatness in Simplicity: Unified Self-Cycle Consistency for Parser-Free Virtual Try-On
Authors: Chenghu Du, Junyin Wang, Shuqing Liu, Shengwu Xiong
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that our method achieves state-of-the-art performance on a popular virtual try-on benchmark. We conduct experiments using the VITON dataset [8] |
| Researcher Affiliation | Academia | 1Wuhan University of Technology, 2Shanghai AI Laboratory 3Sanya Science and Education Innovation Park, Wuhan University of Technology 4Wuhan Textile University, 5Qiongtai Normal University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a project page link (https://du-chenghu.github.io/USC-PFN/) but does not explicitly state that source code is released there or elsewhere. |
| Open Datasets | Yes | We conduct experiments using the VITON dataset [8] |
| Dataset Splits | No | The paper mentions 'training set' and 'test set' but does not explicitly state details for a validation set or its split. |
| Hardware Specification | Yes | The USC-PFN is implemented in PyTorch and trained on a single NVIDIA Tesla V100 GPU running Ubuntu 16.04. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'Ubuntu 16.04' but does not provide specific version numbers for these or other ancillary software dependencies. |
| Experiment Setup | Yes | During training, a batch size of 16 is used for 100 epochs, and the Adam optimizer [29] is employed with parameters β1 = 0.5 and β2 = 0.999; the initial learning rate is set to 1e-4 with linear decay after 50 epochs. ... In the loss functions, λ_r = 20 and λ_p = 0.25 in L_ngd, and λ_scyc = 1, λ_adv^G = 0.1, λ_sr = 50, λ_gr = 1, and λ_cp = 10 in L_sig^t. |
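The quoted experiment setup maps directly onto a standard PyTorch optimizer and scheduler configuration. Below is a minimal sketch of that configuration (Adam with β1 = 0.5, β2 = 0.999, initial learning rate 1e-4, constant for 50 epochs and then linearly decayed over the remaining 50); the `model` placeholder stands in for USC-PFN, whose actual architecture and code are not given in the paper.

```python
# Minimal sketch of the reported training configuration, assuming PyTorch.
# `model` is a placeholder module, not the authors' USC-PFN implementation.
import torch

model = torch.nn.Linear(8, 8)  # placeholder standing in for USC-PFN

# Adam with beta1 = 0.5, beta2 = 0.999 and an initial learning rate of 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.5, 0.999))

# Constant learning rate for the first 50 epochs, then linear decay to 0
total_epochs, decay_start = 100, 50

def lr_lambda(epoch: int) -> float:
    if epoch < decay_start:
        return 1.0
    return max(0.0, 1.0 - (epoch - decay_start) / (total_epochs - decay_start))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)

for epoch in range(total_epochs):
    # ... iterate over VITON training batches of size 16, compute the
    # weighted losses described above, backpropagate, and update ...
    optimizer.step()
    scheduler.step()
```

The loss weights quoted in the table (λ_r, λ_p for L_ngd and λ_scyc, λ_adv^G, λ_sr, λ_gr, λ_cp for L_sig^t) would be applied as scalar multipliers on the corresponding loss terms before backpropagation.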