Neural Inverse Rendering for General Reflectance Photometric Stereo
Authors: Tatsunori Taniai, Takanori Maehara
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present a novel convolutional neural network architecture for photometric stereo (Woodham, 1980), a problem of recovering 3D object surface normals from multiple images observed under varying illuminations. ... Our method is shown to achieve the state-of-the-art performance on a challenging real-world scene benchmark. In this section we evaluate our method using a challenging real-world scene benchmark called DiLiGenT (Shi et al., 2018). In Sec. 4.1, we show comparisons with state-of-the-art photometric stereo methods. We then further analyze our network architecture in Sec. 4.2 and weak supervision technique in Sec. 4.3. In the experiments, we use M = 96 observed images for each scene provided by the DiLiGenT dataset. |
| Researcher Affiliation | Academia | 1RIKEN Center for Advanced Intelligence Project (RIKEN AIP), Nihonbashi, Tokyo, Japan. |
| Pseudocode | No | The paper describes the method and network architecture in detail with diagrams and text, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper states, 'Our method is implemented in Chainer (Tokui et al., 2015)', but it does not provide any specific link or explicit statement regarding the open-sourcing of the code developed for this paper. |
| Open Datasets | Yes | In this section we evaluate our method using a challenging real-world scene benchmark called DiLiGenT (Shi et al., 2018). |
| Dataset Splits | No | The paper describes unsupervised learning performed directly on individual test scenes without pre-training, stating, 'For each test scene, we iterate SGD updates for 1000 steps.' It does not specify traditional training/validation/test dataset splits. |
| Hardware Specification | Yes | Our method is implemented in Chainer (Tokui et al., 2015) and is run on a single NVIDIA Tesla V100 GPU with 16 GB memory and 32-bit floating-point precision. |
| Software Dependencies | No | The paper mentions that the method is 'implemented in Chainer (Tokui et al., 2015)' and 'We use Adam (Kingma & Ba, 2015) as the optimizer', but no specific version numbers for these software components are provided. |
| Experiment Setup | Yes | For each test scene, we iterate SGD updates for 1000 steps. Adam's hyper-parameter α is set to α0 = 8 × 10^-4 for the first 900 iterations, and then decreased to α0/10 for the last 100 iterations for fine-tuning. We use the default values for the other hyper-parameters. The convolution weights are randomly initialized by He initialization (He et al., 2015). |
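The per-scene optimization schedule reported above (1000 Adam steps at α0 = 8 × 10^-4, dropping to α0/10 for the final 100 fine-tuning steps) can be sketched as follows. This is a minimal illustration of the learning-rate schedule only; the function name and constants are ours, not from the paper's code.

```python
# Sketch of the learning-rate schedule described in the Experiment Setup row.
# Hypothetical names; the paper's actual Chainer implementation is not released.

ALPHA0 = 8e-4      # Adam's initial learning rate (alpha_0 in the paper)
TOTAL_STEPS = 1000 # SGD updates per test scene
DECAY_STEP = 900   # after this step, alpha drops to alpha_0 / 10

def adam_alpha(step: int) -> float:
    """Return the Adam learning rate for a given 0-indexed optimization step."""
    return ALPHA0 if step < DECAY_STEP else ALPHA0 / 10

# The resulting schedule: 900 steps at 8e-4, then 100 fine-tuning steps at 8e-5.
schedule = [adam_alpha(t) for t in range(TOTAL_STEPS)]
```

In a training loop, `adam_alpha(t)` would be assigned to the optimizer's α before each update; all other Adam hyper-parameters keep their defaults, per the paper.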