Few-Shot Unsupervised Implicit Neural Shape Representation Learning with Spatial Adversaries
Authors: Amine Ouasfi, Adnane Boukhayma
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments and evaluations, we illustrate the efficacy of our proposed method, highlighting its capacity to improve SDF learning with respect to baselines and the state-of-the-art using synthetic and real data. |
| Researcher Affiliation | Academia | 1Inria, Univ. Rennes, CNRS, IRISA, M2S, France. |
| Pseudocode | Yes | Algorithm 1 The training procedure of our method. |
| Open Source Code | No | The paper states: "Unless stated differently, we use the publicly available official implementations of existing methods." This refers to external methods, not the authors' own implementation of the proposed method. No explicit statement about, or link to, their own code is provided. |
| Open Datasets | Yes | ShapeNet (Chang et al., 2015), Faust (Bogo et al., 2014), 3D Scene (Zhou & Koltun, 2013), Surface Reconstruction Benchmark (SRB) (Williams et al., 2019) |
| Dataset Splits | Yes | We decide the evaluation epoch for all the methods for which we generated results (including our main baseline) in the same way: we choose the best epoch for each method in terms of Chamfer distance between the reconstruction and the input point cloud (see the epoch-selection sketch below the table). |
| Hardware Specification | Yes | We train on an NVIDIA RTX A6000 GPU. |
| Software Dependencies | No | The paper mentions "PyTorch (Paszke et al., 2019)", but does not provide a specific version number for PyTorch or any other software dependency used for the experiments. |
| Experiment Setup | Yes | We train for N_it = 40000 iterations using the Adam optimizer. We use batches of size N_b = 5000. Following NP, we set K = 51 for estimating local standard deviations σ_p (see the training-loop sketch below the table). |
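
The Dataset Splits row quotes a model-selection protocol: for every method, the reported epoch is the one whose reconstruction is closest to the input point cloud in Chamfer distance. The sketch below is only an illustration of that protocol, not the authors' evaluation code; the function names, the unsquared symmetric Chamfer variant, and the assumption that each epoch's reconstruction is available as a sampled point set are all ours.

```python
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> float:
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = torch.cdist(a, b)  # pairwise Euclidean distances, shape (N, M)
    return (d.min(dim=1).values.mean() + d.min(dim=0).values.mean()).item()

def select_evaluation_epoch(reconstructions: dict, input_cloud: torch.Tensor) -> int:
    """Pick the epoch whose reconstruction is closest to the input point cloud.

    reconstructions: dict mapping epoch -> (N, 3) points sampled from that
    epoch's reconstructed surface; input_cloud: (M, 3) input point cloud.
    """
    return min(reconstructions,
               key=lambda e: chamfer_distance(reconstructions[e], input_cloud))
```

In such a setup, a run that checkpoints periodically would reconstruct the surface at each checkpoint, evaluate it against the input cloud with this criterion, and report metrics at the selected epoch.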
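The Experiment Setup row maps onto a small training-loop skeleton. The sketch below assumes PyTorch (as cited in the paper) but is otherwise illustrative: the MLP architecture, learning rate, toy point cloud, and surface-only placeholder loss are assumptions, and the method's actual spatial-adversary objective and σ_p-based sampling are omitted.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted in the Experiment Setup row.
N_IT = 40_000   # training iterations
N_B = 5_000     # query points per batch
K = 51          # neighborhood size for the local standard deviations sigma_p
                # (listed for reference; not used by this toy loss)
LR = 1e-3       # assumed learning rate; not quoted in the table

# Illustrative SDF network: a small MLP mapping 3D points to a scalar distance.
sdf_net = nn.Sequential(
    nn.Linear(3, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
optimizer = torch.optim.Adam(sdf_net.parameters(), lr=LR)

# Toy point cloud standing in for the ShapeNet / Faust / SRB inputs.
point_cloud = torch.rand(10_000, 3)

for it in range(N_IT):
    # Draw a batch of N_B surface points; the actual method also perturbs
    # queries (using the K-nearest-neighbor sigma_p) and applies its losses.
    idx = torch.randint(0, point_cloud.shape[0], (N_B,))
    batch = point_cloud[idx]
    # Placeholder objective: drive the predicted SDF to zero on surface samples.
    loss = sdf_net(batch).abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```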