Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Geodesic-HOF: 3D Reconstruction Without Cutting Corners
Authors: Ziyun Wang, Eric A. Mitchell, Volkan Isler, Daniel D. Lee (pp. 2844-2851)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that taking advantage of these learned lifted coordinates yields better performance for estimating surface normals and generating surfaces than using point cloud reconstructions alone. In this section, we demonstrate the utility of Geo-HOF in several 3D reconstruction settings. First, we show that Geodesic-HOF is able to reconstruct 3D objects accurately while learning the geodesic distance. On the ShapeNet (Chang et al. 2015) dataset, Geodesic-HOF performs competitively in terms of Chamfer distance (Table 1) and in normal consistency (Table 2) compared against the current state-of-the-art 3D reconstruction methods. |
| Researcher Affiliation | Industry | Samsung AI Center, New York, NY 10011 |
| Pseudocode | No | The paper describes the network architecture and loss functions, but does not provide any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about releasing open-source code or provide a link to a code repository for the described methodology. |
| Open Datasets | Yes | On the ShapeNet (Chang et al. 2015) dataset, Geodesic-HOF performs competitively in terms of Chamfer distance (Table 1) and in normal consistency (Table 2) compared against the current state-of-the-art 3D reconstruction methods. For fair comparison, we use the data split provided in (Choy et al. 2016b). |
| Dataset Splits | No | The paper mentions using … |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using … |
| Experiment Setup | Yes | We use the Adam Optimizer (Kingma and Ba 2015) with learning rate 1e-5. Practically, we choose λG and λC to be 0.1 and 1.0 respectively. |
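The Chamfer distance cited in the Research Type and Open Datasets rows is the headline reconstruction metric (Table 1 of the paper). As a reference for the metric itself, here is a minimal pure-Python sketch; the squared-distance and per-set-averaging convention is a common one but is an assumption, not taken from the paper:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between two point sets.

    A and B are lists of 3D points (tuples). Each direction averages the
    squared nearest-neighbour distance; conventions on squaring and
    normalisation vary between papers.
    """
    a_to_b = sum(min(dist(a, b) ** 2 for b in B) for a in A) / len(A)
    b_to_a = sum(min(dist(a, b) ** 2 for a in A) for b in B) / len(B)
    return a_to_b + b_to_a

# Identical point clouds have zero Chamfer distance.
pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(chamfer_distance(pts, pts))  # 0.0
```

This brute-force version is O(|A|·|B|); practical evaluations use a KD-tree or GPU nearest-neighbour search.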
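The Experiment Setup row reports the only quantitative training details extracted: Adam with learning rate 1e-5 and loss weights λG = 0.1, λC = 1.0. A sketch of how such a weighted objective is typically assembled; the simple weighted-sum form and the placeholder term values are assumptions, not the authors' code:

```python
# Values reported in the paper's experiment setup.
LEARNING_RATE = 1e-5
LAMBDA_G = 0.1  # weight on the geodesic loss term
LAMBDA_C = 1.0  # weight on the Chamfer loss term

def total_loss(geodesic_term: float, chamfer_term: float) -> float:
    # Weighted sum of the two loss terms; the exact definition of each
    # term follows the paper and is not reproduced here.
    return LAMBDA_G * geodesic_term + LAMBDA_C * chamfer_term

# Placeholder scalars standing in for actual loss values.
print(total_loss(2.0, 0.5))  # 0.1*2.0 + 1.0*0.5 = 0.7
```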