PINs: Progressive Implicit Networks for Multi-Scale Neural Representations
Authors: Zoe Landgraf, Alexander Sorkine Hornung, Ricardo S Cabral
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on several 2D and 3D datasets show improvements in reconstruction accuracy, representational capacity and training speed compared to baselines. |
| Researcher Affiliation | Collaboration | 1 Department of Computing, Imperial College London, London, UK (research done during an internship at Meta, Zurich). 2 Meta, Zurich, Switzerland. |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about open-source code availability or links to a code repository. |
| Open Datasets | Yes | We evaluate our method on natural image reconstruction tasks with a subset of the ImageNet test dataset (Deng et al., 2009), high resolution images from the DIV2K validation dataset (Agustsson, 2017) and qualitatively, with images from the 2017 COCO validation set (Lin et al., 2014). |
| Dataset Splits | Yes | Evaluation uses the ImageNet test set, the DIV2K validation set, and the 2017 COCO validation set. Unless mentioned otherwise, we train on a uniformly sampled subset of 50% of the image pixels for all 2D regression tasks. For 3D shape regression, we train on an average of 456k SDF samples per shape. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions using the 'Adam optimiser (Kingma & Ba, 2015)' but does not specify version numbers for any software dependencies, such as programming languages, libraries, or frameworks. |
| Experiment Setup | Yes | We use the Adam optimiser (Kingma & Ba, 2015) and a standard learning rate of 1e-3 for all experiments. Unless mentioned otherwise, we train on a uniformly sampled subset of 50% of the image pixels for all 2D regression tasks. For 3D shape regression, we train on an average of 456k SDF samples per shape. For the presented experiments, unless specified, we train with 3 levels of detail, a hidden layer size of 256 per level and σ = 15. We extend the baseline architectures to have the same number and size of hidden layers. Our final loss is defined as Lr + ωLreg and we find that a value of ω = 0.01 works well for our experiments. |
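The experiment-setup row above can be summarised as a small configuration sketch. This is a hedged illustration only: the hyperparameter values (learning rate 1e-3, ω = 0.01, 3 levels of detail, hidden size 256, σ = 15) come from the paper's stated setup, but all function and variable names below are assumptions, not the authors' released code.

```python
# Illustrative sketch of the training configuration described in the table.
# Values are from the paper's experiment setup; names are hypothetical.

LEARNING_RATE = 1e-3   # Adam learning rate used for all experiments
OMEGA = 0.01           # weight on the regularisation term in the final loss
NUM_LEVELS = 3         # levels of detail trained unless specified otherwise
HIDDEN_SIZE = 256      # hidden layer size per level
SIGMA = 15             # sigma hyperparameter from the setup description


def total_loss(l_reconstruction: float, l_regularisation: float,
               omega: float = OMEGA) -> float:
    """Final loss L = L_r + omega * L_reg, as stated in the experiment setup."""
    return l_reconstruction + omega * l_regularisation


def combine_levels(level_outputs: list) -> float:
    """Sum per-level predictions. Shown only to illustrate the multi-level
    (progressive) idea; the exact combination rule is an assumption here."""
    return sum(level_outputs)
```

For example, with a reconstruction loss of 1.0 and a regularisation loss of 2.0, `total_loss` returns 1.0 + 0.01 × 2.0 = 1.02, matching the ω = 0.01 weighting quoted from the paper.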