Analytic Marching: An Analytic Meshing Solution from Deep Implicit Surface Networks
Authors: Jiabao Lei, Kui Jia
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on deep learning mesh reconstruction verify the advantages of our algorithm over existing ones. ... We present careful ablation studies in the context of deep learning mesh reconstruction. Experiments on benchmark datasets of 3D object repositories show the advantages of our algorithm over existing ones... |
| Researcher Affiliation | Academia | (1) School of Electronic and Information Engineering, South China University of Technology, Guangzhou, Guangdong, China; (2) Pazhou Lab, Guangzhou, 510335, China. Correspondence to: Kui Jia <kuijia@scut.edu.cn>. |
| Pseudocode | No | The paper describes the steps of the 'analytic marching' algorithm in numbered paragraphs (Section 4) but does not provide a formal pseudocode block or an explicitly labeled 'Algorithm' section. (A hedged sketch of the algorithm's core step is given after the table.) |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We use five categories of Rifle, Chair, Airplane, Sofa, and Table from the ShapeNet Core V1 dataset (Chang et al., 2015), 200 object instances per category, for evaluation of different meshing algorithms. |
| Dataset Splits | No | The paper mentions training an MLP based SDF and evaluation metrics, but it does not explicitly provide details about specific train/validation/test dataset splits (e.g., percentages, sample counts, or defined subsets) for its own experiments. |
| Hardware Specification | Yes | the current algorithm is simply implemented on a CPU (Intel E5-2630 @ 2.20GHz)... implemented on a GPU (Tesla K80) |
| Software Dependencies | No | The paper does not provide specific details about ancillary software dependencies, such as programming languages or library versions, that would be needed for replication. |
| Experiment Setup | Yes | Our training hyperparameters are as follows. The learning rates start at 1e-3, and decay every 20 epochs by a factor of 10, until the total number of 60 epochs. We set weight decay as 1e-4 and the penalty in (15) as α = 0.01. (See the training-configuration sketch below the table.) |
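
Although the paper gives no formal pseudocode, the observation underlying analytic marching is that a ReLU MLP is piecewise affine: once the 0/1 activation state of every hidden unit is fixed, the network collapses to a single affine map, and its zero level set inside that linear region is an exact planar face. The NumPy sketch below illustrates only this collapse step; `affine_in_region` is a hypothetical helper name, and the region enumeration and polytope clipping that the full algorithm performs are omitted.

```python
import numpy as np

def affine_in_region(weights, biases, activation_pattern):
    """Collapse a ReLU MLP f(x) = W_L s(... s(W_1 x + b_1) ...) + b_L into the
    single affine map f(x) = W @ x + b that it equals inside the linear region
    fixed by `activation_pattern` (one 0/1 mask per hidden layer).

    Illustrative sketch, not the authors' implementation.
    weights[i] has shape (n_out, n_in); biases[i] has shape (n_out,).
    """
    W, b = weights[0], biases[0]
    for i, mask in enumerate(activation_pattern):
        D = np.diag(mask.astype(float))      # frozen ReLU states act as a 0/1 diagonal matrix
        W, b = D @ W, D @ b                  # apply the fixed activations
        W = weights[i + 1] @ W               # compose with the next linear layer
        b = weights[i + 1] @ b + biases[i + 1]
    return W, b                              # f(x) = W @ x + b inside this region

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(8, 3)), rng.normal(size=(8, 8)), rng.normal(size=(1, 8))]
    biases = [rng.normal(size=8), rng.normal(size=8), rng.normal(size=1)]
    pattern = [rng.integers(0, 2, size=8), rng.integers(0, 2, size=8)]
    w, b = affine_in_region(weights, biases, pattern)
    # Inside this region, the plane {x : w @ x + b = 0} is the exact mesh face.
    print(w, b)
```

Because the face is recovered in closed form rather than sampled on a voxel grid, the extracted mesh is exact with respect to the learned network, which is the advantage the paper claims over marching-cubes-style discretization.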
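The quoted training schedule maps directly onto standard deep learning tooling. The PyTorch sketch below is a minimal illustration under stated assumptions: the paper does not name its framework, optimizer, or network sizes, so Adam and the placeholder MLP are guesses; only the 1e-3 learning rate, the decay-by-10-every-20-epochs schedule over 60 epochs, the 1e-4 weight decay, and α = 0.01 come from the quoted text.

```python
import torch

# Placeholder SDF network: the paper trains an MLP-based SDF, but the layer
# sizes below are assumptions made for this sketch.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 1),
)

# Adam is an assumption; the lr and weight decay are the reported values.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
# "decay every 20 epochs by a factor of 10, until the total number of 60 epochs"
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.1)
alpha = 0.01  # penalty weight for the regularizer in the paper's Eq. (15)

for epoch in range(60):
    # One pass over the SDF training data would go here, e.g.:
    # loss = sdf_data_term + alpha * penalty; loss.backward(); optimizer.step()
    scheduler.step()
```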