CircNet: Meshing 3D Point Clouds with Circumcenter Detection
Authors: Huan Lei, Ruitao Leng, Liang Zheng, Hongdong Li
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To validate the proposed method, we train the detection neural network on the ABC dataset (Koch et al. 2019). The trained model is evaluated on the ABC and other datasets, including FAUST (Bogo et al. 2014), MGN (Bhatnagar et al. 2019), and Matterport3D (Chang et al. 2017). The method not only reconstructs high-quality meshes, but also outperforms previous learning-based approaches by a large margin in efficiency. |
| Researcher Affiliation | Academia | Huan Lei, Ruitao Leng, Liang Zheng, Hongdong Li School of Computing, The Australian National University |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and trained models are provided at https://github.com/Ruitao-L/CircNet. |
| Open Datasets | Yes | To validate the proposed method, we train the detection neural network on the ABC dataset (Koch et al. 2019). The trained model is evaluated on the ABC and other datasets, including FAUST (Bogo et al. 2014), MGN (Bhatnagar et al. 2019), and Matterport3D (Chang et al. 2017). |
| Dataset Splits | No | The paper states "we apply a train/test split of 25%/75%" for the ABC dataset, but does not explicitly provide details for a separate validation split or its proportion. |
| Hardware Specification | No | The paper mentions "inference time on the same machine" but does not provide specific details about the hardware used (e.g., CPU, GPU models, memory). |
| Software Dependencies | No | The paper mentions "C implementations with Python interface" but does not specify version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | Here η0 is a hyperparameter controlling the spatial resolution of each KNN patch. [...] Eventually, the multi-task loss function of our neural network is formulated as L = L1+λL2, where λ is a hyperparameter for balancing the different loss terms. [...] we find that two predictions per anchor cell (i.e. s = 2) reaches a good balance between performance and efficiency. |
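The setup row quotes the paper's multi-task objective L = L1 + λL2, where λ balances the two loss terms. A minimal sketch of how such a combined loss is computed (the function name, argument names, and the λ value shown are illustrative assumptions, not taken from the paper):

```python
def multi_task_loss(l1: float, l2: float, lam: float = 0.5) -> float:
    """Combine two task losses as L = L1 + lambda * L2.

    `lam` corresponds to the paper's balancing hyperparameter; the
    default value of 0.5 here is illustrative, not from the paper.
    """
    return l1 + lam * l2

# Example: combine two hypothetical per-batch loss values.
total = multi_task_loss(0.8, 0.4, lam=0.5)  # -> 1.0
```

In practice each term would itself be a differentiable tensor (e.g. a detection loss and a regression loss), and λ would be tuned on held-out data.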