Reconstruction of Manipulated Garment with Guided Deformation Prior

Authors: Ren Li, Corentin Dumery, Zhantao Deng, Pascal Fua

NeurIPS 2024

Each reproducibility variable is listed below with its result and the supporting LLM response.

Research Type: Experimental
  "Experimental results demonstrate the superior reconstruction accuracy of our method compared to previous ones, especially when dealing with large non-rigid deformations arising from the manipulations." "We validate our approach on the data from the VR-Folding dataset [2], where point clouds are generated from multi-view RGBD images."

Researcher Affiliation: Academia
  Ren Li, Corentin Dumery, Zhantao Deng, Pascal Fua; Computer Vision Lab, EPFL, Lausanne, Switzerland (ren.li@epfl.ch, corentin.dumery@epfl.ch, zhantao.deng@epfl.ch, pascal.fua@epfl.ch).

Pseudocode: No
  No section or figure explicitly labeled "Pseudocode" or "Algorithm" is present.

Open Source Code: Yes
  "Our implementation and model weights are available at https://github.com/liren2515/GarmentFolding."

Open Datasets: Yes
  "We train our models using data from the VR-Folding [2] and CLOTH3D [55] datasets."

Dataset Splits: No
  The paper mentions training and test splits but does not specify a separate validation split.

Hardware Specification: Yes
  "All the models are trained using the Adam optimizer [63] on NVIDIA A100 GPUs."

Software Dependencies: No
  The paper names software components and architectures such as U-Net and the Adam optimizer, but provides no version numbers for its software dependencies or libraries.

Experiment Setup: Yes
  "We train IΘ and AΦ jointly for 9000 iterations with a batch size of 50." "The diffusion model is trained for 100 epochs, with a learning rate of 1e-4, a batch size of 64, and T = 1000 steps." "We choose K = 128 and train G for 100 epochs, using a learning rate of 1e-4 and a batch size of 128."
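
For quick reference, the quoted training setup can be collected into a single configuration. The sketch below is a minimal illustration in PyTorch: the `TrainingConfig` class, its field names, and the `build_optimizer` helper are hypothetical and are not taken from the authors' released code; only the numeric values and the use of Adam come from the paper.

```python
from dataclasses import dataclass

import torch


@dataclass
class TrainingConfig:
    """Hyperparameters as quoted from the paper (field names are hypothetical)."""
    # Joint training of I_Theta and A_Phi (no learning rate is quoted for this stage).
    joint_iterations: int = 9000
    joint_batch_size: int = 50
    # Diffusion model.
    diffusion_epochs: int = 100
    diffusion_lr: float = 1e-4
    diffusion_batch_size: int = 64
    diffusion_steps: int = 1000  # T
    # Network G with K latent codes.
    num_latent_codes: int = 128  # K
    g_epochs: int = 100
    g_lr: float = 1e-4
    g_batch_size: int = 128


def build_optimizer(model: torch.nn.Module, lr: float) -> torch.optim.Adam:
    """The paper states all models are trained with Adam (on NVIDIA A100 GPUs)."""
    return torch.optim.Adam(model.parameters(), lr=lr)
```

Note that the quoted setup gives no learning rate for the joint training of IΘ and AΦ, so none is assumed above.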