GridFormer: Point-Grid Transformer for Surface Reconstruction
Authors: Shengtao Li, Ge Gao, Yudong Liu, Yu-Shen Liu, Ming Gu
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments validate that our method is effective and outperforms the state-of-the-art approaches under widely used benchmarks by producing more precise geometry reconstructions. |
| Researcher Affiliation | Academia | Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing, China; School of Software, Tsinghua University, Beijing, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/list17/GridFormer. |
| Open Datasets | Yes | We use ShapeNet (Chang et al. 2015) for object-level reconstruction evaluation. For our scene-level reconstruction, we use the Synthetic Rooms dataset (Peng et al. 2020) and ScanNet-v2 (Dai et al. 2017). |
| Dataset Splits | Yes | We utilize the same train/validation/test splits as previously established. |
| Hardware Specification | No | The paper mentions 'GPU Memory (MiB)' in Table 8, but does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper states 'We implement our model in Pytorch (Paszke et al. 2019) and use the Adam optimizer (Kingma and Ba 2014),' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The learning rate is 10^-4 at the first stage and 10^-6 at the fine-tuning stage. The depth of our U-Net-like encoder is 4, and, following Wang et al. (2023), we do not downsample or upsample the grid features in the two top levels. The radius r used to search for the opposite points is set to 0.08, and the margin m is set to 2.0. (See the hedged sketches after this table.) |
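
To make the reported two-stage schedule concrete, here is a minimal PyTorch sketch: Adam at 10^-4 for the first stage, then 10^-6 for fine-tuning. The model body is a hypothetical stand-in, not the GridFormer architecture, which is defined in the linked repository.

```python
import torch

# Hypothetical stand-in model; the real GridFormer network is defined in
# the authors' repository (https://github.com/list17/GridFormer).
model = torch.nn.Sequential(
    torch.nn.Linear(3, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 1),
)

# Stage 1: initial training with the reported learning rate of 1e-4.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# ... first-stage training loop runs here ...

# Stage 2: drop the learning rate to the reported 1e-6 for fine-tuning.
for group in optimizer.param_groups:
    group["lr"] = 1e-6

# ... fine-tuning loop continues with the same optimizer state ...
```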
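
Similarly, a brute-force sketch of a radius-r neighbor query with the reported r = 0.08, plus a generic hinge-style penalty using the reported margin m = 2.0. The paper's exact loss formulation and search structure are not quoted here, so `margin_penalty` and the brute-force search are illustrative assumptions, not the authors' implementation.

```python
import torch

def neighbors_within_radius(queries: torch.Tensor,
                            points: torch.Tensor,
                            r: float = 0.08) -> list:
    """Indices of `points` within radius r of each query (brute force).

    queries: (Q, 3), points: (N, 3). The paper only states r = 0.08;
    its actual search structure is not specified in this assessment.
    """
    dists = torch.cdist(queries, points)  # (Q, N) pairwise Euclidean distances
    return [torch.nonzero(row <= r).flatten() for row in dists]

def margin_penalty(x: torch.Tensor, m: float = 2.0) -> torch.Tensor:
    # ASSUMPTION: a generic hinge-style margin with m = 2.0, for
    # illustration only; the paper's exact loss is not reproduced here.
    return torch.clamp(m - x, min=0.0).mean()

# Usage: find neighbors of 5 random queries in a 1000-point cloud.
queries = torch.rand(5, 3)
cloud = torch.rand(1000, 3)
idx = neighbors_within_radius(queries, cloud)
```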