Ghost on the Shell: An Expressive Representation of General 3D Shapes
Authors: Zhen Liu, Yao Feng, Yuliang Xiu, Weiyang Liu, Liam Paull, Michael J. Black, Bernhard Schölkopf
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate that G-SHELL achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks, while also performing effectively for watertight meshes. (§6, Experiments and Results) We compare our method to current state-of-the-art methods for non-watertight mesh reconstruction: NeuralUDF [38], NeUDF [35] and NeAT [40]. Table 1 shows the PSNR averaged over all test views. Table 2: Chamfer distance (cm) on DeepFashion3D garment instances. (Both metrics are sketched in code below the table.) |
| Researcher Affiliation | Academia | 1 Max Planck Institute for Intelligent Systems, Tübingen; 2 Mila Quebec AI Institute, Université de Montréal; 3 ETH Zürich; 4 University of Cambridge |
| Pseudocode | Yes | Algorithm 1: Mesh Extraction with G-SHELL. (An illustrative, non-authoritative sketch of the open-surface extraction idea follows the table.) |
| Open Source Code | No | Project page: gshell3d.github.io. The paper provides a project-page URL, but this is not explicitly a direct link to a source-code repository, and the text contains no explicit statement about releasing code for the described method. |
| Open Datasets | Yes | Dataset. We use DeepFashion3D-v2 [18] to quantitatively evaluate the performance of reconstruction with G-SHELL on non-watertight meshes. ... [18] Heming Zhu, Yu Cao, Hang Jin, Weikai Chen, Dong Du, Zhangye Wang, Shuguang Cui, and Xiaoguang Han. Deep Fashion3D: A dataset and benchmark for 3D garment reconstruction from single images. In ECCV, 2020. |
| Dataset Splits | No | The paper specifies 72 views for training and 200 views for testing, but it does not mention an explicit validation set or its split. |
| Hardware Specification | Yes | Along with Nvdiffrecmc, G-SHELL takes only 3 hours to fit a ground-truth shape, while NeuralUDF, NeUDF, and NeAT take 17.3, 16.4, and 4.3 hours, respectively. For novel-view synthesis with all images rendered at a resolution of 512×512, our method runs at 2.7 sec/img (inferring from a learned tetrahedral grid with the Nvdiffrast rasterizer [27]), while NeuralUDF, NeUDF, and NeAT run at 1.8 min/img, 1.4 min/img, and 9.7 min/img, respectively. Compared to the other methods, ours is significantly faster in both training and inference due to its highly efficient rasterization. (Tested on the same machine with a single NVIDIA RTX 6000 GPU.) |
| Software Dependencies | No | The paper mentions various software components and tools like Nvdiffrecmc, DMTet, Blender with Cycles engine, COLMAP, Nvdiffrast rasterizer, and Adam optimizer, but it does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | For the multi-view reconstruction with G-SHELL, we set the grid resolution to 128 for tetrahedral grids and 80 for FlexiCubes, and train the models for 5000 iterations using a batch size of 2 views. (These hyperparameters are collected in the config sketch below the table.) |
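
For context on the two quantitative metrics quoted in the table (PSNR averaged over test views in Table 1, Chamfer distance in Table 2), here is a minimal NumPy sketch. This is not the authors' evaluation code; the function names and the brute-force Chamfer computation are illustrative assumptions.

```python
# Hedged sketch of the two metrics quoted above: PSNR between rendered
# and ground-truth images, and symmetric Chamfer distance between point
# sets sampled from meshes. Names are illustrative, not the paper's code.
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, max_val: float = 1.0) -> float:
    """PSNR = 10 * log10(MAX^2 / MSE), for images with values in [0, max_val]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    Brute-force O(N*M) pairwise distances; real evaluation pipelines
    typically use a KD-tree or GPU nearest-neighbor ops instead.
    The unit (e.g. cm, as in Table 2) follows the unit of the inputs.
    """
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M)
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())
```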
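
The quoted "Algorithm 1: Mesh Extraction with G-SHELL" is only named, not reproduced, in the excerpt. The sketch below illustrates the general open-surface extraction idea under loose assumptions: a signed-distance grid defines a watertight iso-surface, and a second scalar field (standing in for the paper's learned manifold field) decides which faces of that surface are kept, yielding a non-watertight mesh. `sdf_grid` and `msdf_at` are hypothetical names, and marching cubes stands in for the paper's grid-based extraction; this is not the paper's exact algorithm.

```python
# Illustrative sketch of open-surface ("non-watertight") mesh extraction
# in the spirit of the quoted "Algorithm 1: Mesh Extraction with G-SHELL".
# NOT the paper's algorithm: `sdf_grid` and `msdf_at` are hypothetical
# stand-ins for the learned fields, and marching cubes replaces the
# paper's marching-tetrahedra / FlexiCubes extraction step.
import numpy as np
from skimage.measure import marching_cubes

def extract_open_surface(sdf_grid: np.ndarray, msdf_at, level: float = 0.0):
    """Extract the zero level set of `sdf_grid`, then drop faces whose
    vertices lie where the manifold field `msdf_at(points)` is negative."""
    verts, faces, _, _ = marching_cubes(sdf_grid, level=level)
    keep = msdf_at(verts) > 0.0           # per-vertex "on the open surface" mask
    face_keep = keep[faces].all(axis=1)   # keep only fully retained triangles
    faces = faces[face_keep]
    # Re-index vertices so the trimmed, non-watertight mesh is compact.
    used = np.unique(faces)
    remap = -np.ones(len(verts), dtype=np.int64)
    remap[used] = np.arange(len(used))
    return verts[used], remap[faces]
```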
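
The hyperparameters in the "Experiment Setup" row can be collected in a small config object. The values below come from the quoted text; the field names and structure are assumptions for illustration, not the authors' code.

```python
# Hypothetical config mirroring the hyperparameters quoted in the
# "Experiment Setup" row. Field names are assumptions; values are from
# the paper's quoted text.
from dataclasses import dataclass

@dataclass
class ReconConfig:
    grid_type: str = "tet"        # "tet" or "flexicubes"
    grid_resolution: int = 128    # 128 for tetrahedral grids, 80 for FlexiCubes
    num_iterations: int = 5000
    batch_size: int = 2           # views per iteration
    optimizer: str = "adam"       # Adam is mentioned; no version is given

tet_cfg = ReconConfig()
flexicubes_cfg = ReconConfig(grid_type="flexicubes", grid_resolution=80)
```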