CycleVTON: A Cycle Mapping Framework for Parser-Free Virtual Try-On

Authors: Chenghu Du, Junyin Wang, Yi Rong, Shuqing Liu, Kai Liu, Shengwu Xiong

Venue: AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on challenging benchmarks demonstrate that our proposed method exhibits superior performance compared to state-of-the-art methods.
Researcher Affiliation | Academia | (1) School of Computer Science and Artificial Intelligence, Wuhan University of Technology, Wuhan, 430070; (2) Shanghai Artificial Intelligence Laboratory, Shanghai, 200232; (3) Sanya Science and Education Innovation Park, Wuhan University of Technology, Sanya, 572000; (4) School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, 430200; (5) School of Information Science and Technology, Qiongtai Normal University, Haikou, 571127
Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found.
Open Source Code | No | The paper does not provide a direct link or an explicit statement about the availability of the source code for the described methodology.
Open Datasets | Yes | VITON: We use VITON dataset (Han et al. 2018), which consists of 16,253 image groups with the resolution of 256 × 192. ... VITON-HD: We also use VITON-HD dataset collected by (Choi et al. 2021) to demonstrate the generalization of handling high-resolution images, which comprises 13,679 image groups with the resolution of 512 × 384.
Dataset Splits | No | The dataset is split into a training set with 14,221 groups and a testing set with 2,032 groups (for VITON). All components are the same as VITON, and are split into a training set with 11,647 groups and a testing set with 2,032 groups (for VITON-HD). The paper only mentions training and testing splits, not an explicit validation split.
Hardware Specification | Yes | Our framework is implemented using PyTorch and trained on 1 Nvidia Tesla V100 GPU running Ubuntu 16.04.
Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with their versions.
Experiment Setup | Yes | During training, we use the AdamW optimizer (β1 = 0.5 and β2 = 0.999) (Loshchilov and Hutter 2017) with a batch size of 1 and an initial learning rate of 1e-4. Our framework is iteratively optimized for 200 epochs; the learning rate is linearly reduced to 0 over the last 100 epochs.
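
The quoted setup maps directly onto standard PyTorch components. The sketch below is a reconstruction under stated assumptions, not the authors' code (none is released): the placeholder model, the LambdaLR schedule, and the per-epoch scheduler step are illustrative choices that realize the 200-epoch run with linear decay to 0 over the last 100 epochs.

from torch import nn, optim

# Placeholder module; the actual CycleVTON networks are not public.
model = nn.Conv2d(3, 3, kernel_size=3, padding=1)

# AdamW with the quoted hyperparameters (beta1 = 0.5, beta2 = 0.999, lr = 1e-4).
optimizer = optim.AdamW(model.parameters(), lr=1e-4, betas=(0.5, 0.999))

total_epochs = 200
decay_start = 100  # constant lr for the first 100 epochs, then linear decay to 0

def lr_lambda(epoch):
    # Multiplicative factor applied to the initial lr at each epoch.
    if epoch < decay_start:
        return 1.0
    return (total_epochs - epoch) / float(total_epochs - decay_start)

scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for epoch in range(total_epochs):
    # ... one pass over the VITON training set with batch size 1 ...
    scheduler.step()

Under this schedule the factor stays at 1.0 through epoch 100 and then falls linearly, reaching 0.01 (lr = 1e-6) at the final epoch.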