Learning Neural Implicit through Volume Rendering with Attentive Depth Fusion Priors
Authors: Pengchong Hu, Zhizhong Han
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our evaluations on widely used benchmarks including synthetic and real-world scans show our superiority over the latest neural implicit methods. |
| Researcher Affiliation | Academia | Pengchong Hu and Zhizhong Han, Machine Perception Lab, Wayne State University, Detroit, USA (pchu@wayne.edu, h312h@wayne.edu) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Please see our project page for code and data at https://machineperceptionlab.github.io/Attentive_DF_Prior/. |
| Open Datasets | Yes | We report evaluations on both synthetic datasets and real scans including Replica [64] and ScanNet [13]. |
| Dataset Splits | No | The paper does not provide the dataset split information (exact percentages, sample counts, citations to predefined splits, or a splitting methodology) needed to reproduce the data partitioning. It mentions using scenes from Replica and ScanNet but does not state how these are divided into train/validation/test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes | We shoot K = 1000 or 5000 rays for reconstruction or tracking from each view, and render E = 5 or 10 frames each time for fair comparison with other methods. We set λ = 0.2, λ1 = 0.5 in loss functions. We sample N = 48 points along each ray for rendering. |
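For reference, the setup values quoted in the table can be gathered into a single configuration sketch. This is a minimal illustration assuming nothing beyond the reported numbers; the names (`TrainConfig`, field names) are hypothetical and do not come from the authors' code.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TrainConfig:
    """Hypothetical container for the hyperparameters reported in the paper."""
    rays_reconstruction: int = 1000  # K rays per view for reconstruction
    rays_tracking: int = 5000        # K rays per view for tracking
    frames_low: int = 5              # E frames rendered per step (one setting)
    frames_high: int = 10            # E frames rendered per step (other setting)
    lam: float = 0.2                 # loss weight lambda
    lam1: float = 0.5                # loss weight lambda_1
    samples_per_ray: int = 48        # N points sampled along each ray


cfg = TrainConfig()
print(cfg.rays_reconstruction, cfg.samples_per_ray)  # 1000 48
```

Collecting the values this way makes it easy to check a reimplementation against the reported setup, even though the paper itself does not ship such a config file.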