Garment4D: Garment Reconstruction from Point Cloud Sequences

Authors: Fangzhou Hong, Liang Pan, Zhongang Cai, Ziwei Liu

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "high-quality reconstruction results are qualitatively and quantitatively illustrated through extensive experiments. Codes are available at https://github.com/hongfz16/Garment4D." (Section 3, Experiments; Section 3.1, Datasets and Evaluation Protocols)
Researcher Affiliation | Collaboration | 1 S-Lab, Nanyang Technological University; 2 SenseTime Research; 3 Shanghai AI Laboratory. {fangzhou001, liang.pan, ziwei.liu}@ntu.edu.sg, caizhongang@sensetime.com
Pseudocode | No | The paper describes its methods in prose and refers to network components, but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | "Codes are available at https://github.com/hongfz16/Garment4D."
Open Datasets | Yes | "We establish point cloud sequence-based garment reconstruction dataset by adapting CLOTH3D [18] for our experiments." CLOTH3D is available from http://chalearnlap.cvc.uab.es/dataset/38/description/.
Dataset Splits | No | "We split the sequences to training and testing sets at the ratio of 8 : 2." Only the ratio is stated; the exact sequence assignments are not published.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, clock speeds, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions using PointNet++ and adapting Multi-Garment Net, but does not provide version numbers for these or any other software components.
Experiment Setup | No | The paper describes the network architecture and loss functions, but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations in the main text.
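Since only the 8 : 2 train/test ratio is reported, anyone re-splitting the data must choose their own procedure. A minimal sketch of one such split is below; the sequence IDs, shuffling, and seed are all illustrative assumptions, not the authors' published split:

```python
import random

def split_sequences(sequence_ids, train_ratio=0.8, seed=0):
    """Split sequence IDs into train/test sets at the given ratio.

    NOTE: illustrative only -- Garment4D's actual split (exact
    sequence assignments, shuffle order, seed) is not published.
    """
    ids = list(sequence_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    cut = int(len(ids) * train_ratio)
    return ids[:cut], ids[cut:]

# Hypothetical CLOTH3D-style sequence IDs
train, test = split_sequences([f"seq_{i:04d}" for i in range(100)])
print(len(train), len(test))  # 80 20
```

Fixing the seed makes the split deterministic, which is exactly the detail a paper would need to report for the split to be reproducible.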