Differentiable rendering with perturbed optimizers
Authors: Quentin Le Lidec, Ivan Laptev, Cordelia Schmid, Justin Carpentier
NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction. By providing informative gradients that can be used as a strong supervisory signal, we demonstrate the benefits of perturbed renderers to obtain more accurate solutions when compared to the state-of-the-art alternatives using smooth gradient approximations. |
| Researcher Affiliation | Academia | Inria, Département d'Informatique de l'École normale supérieure, PSL Research University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | Our implementation is based on Pytorch3d [31] and will be publicly released upon publication. |
| Open Datasets | Yes | We use the network architecture proposed in [23] and the subset of the Shapenet dataset [7] from [16]. [7] A. X. Chang, T. Funkhouser, L. Guibas, P. Hanrahan, Q. Huang, Z. Li, S. Savarese, M. Savva, S. Song, H. Su, et al. ShapeNet: An information-rich 3D model repository. arXiv preprint arXiv:1512.03012, 2015. |
| Dataset Splits | No | The paper mentions 'training' and 'test set' but does not specify the training/validation/test splits, nor any percentages or sample counts for them. |
| Hardware Specification | Yes | each optimization problem taking about 1 minute to solve on an Nvidia RTX 6000 GPU. |
| Software Dependencies | No | Our implementation is based on Pytorch3d [31], but no specific version number for Pytorch3d or other software dependencies is provided. |
| Experiment Setup | Yes | For this task, we use Adam [17] with parameters β₁ = 0.9 and β₂ = 0.999 and operate with 128×128 RGB images. For training, we use the Adam algorithm with a learning rate of 10⁻⁴ and parameters β₁ = 0.9, β₂ = 0.999. The training is done by minimizing the following loss: L = λ_sil · L_sil + λ_RGB · L_RGB + λ_lap · L_lap. |
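The experiment setup quoted above (Adam with lr = 10⁻⁴, β₁ = 0.9, β₂ = 0.999, 128×128 RGB images, and a weighted sum of silhouette, RGB, and Laplacian losses) can be illustrated with a minimal sketch. This is not the authors' released code: the loss weights, the placeholder tensors standing in for renderer outputs, and the zero stand-in for the mesh Laplacian regularizer are all illustrative assumptions.

```python
# Minimal sketch of the training objective L = λ_sil·L_sil + λ_RGB·L_RGB + λ_lap·L_lap,
# optimized with Adam (lr = 1e-4, betas = (0.9, 0.999)) on 128x128 RGB images.
# The "predicted" tensors below are placeholders for a differentiable renderer's
# output; the Laplacian term is a stand-in, not the paper's mesh regularizer.
import torch
import torch.nn.functional as F

lambda_sil, lambda_rgb, lambda_lap = 1.0, 1.0, 1.0  # illustrative weights only

pred_rgb = torch.rand(1, 128, 128, 3, requires_grad=True)   # rendered RGB image
pred_sil = torch.rand(1, 128, 128, requires_grad=True)      # rendered silhouette
target_rgb = torch.rand(1, 128, 128, 3)                      # reference RGB image
target_sil = (torch.rand(1, 128, 128) > 0.5).float()         # reference silhouette

optimizer = torch.optim.Adam([pred_rgb, pred_sil], lr=1e-4, betas=(0.9, 0.999))

for _ in range(10):
    optimizer.zero_grad()
    loss_rgb = F.mse_loss(pred_rgb, target_rgb)   # photometric term L_RGB
    loss_sil = F.mse_loss(pred_sil, target_sil)   # silhouette term L_sil
    loss_lap = torch.tensor(0.0)                  # stand-in for Laplacian term L_lap
    loss = lambda_sil * loss_sil + lambda_rgb * loss_rgb + lambda_lap * loss_lap
    loss.backward()
    optimizer.step()
```

In the paper's actual pipeline the rendered image and silhouette would come from a (perturbed) differentiable renderer built on PyTorch3D, and the Laplacian term would regularize the deforming mesh; the sketch only shows how the three weighted terms and the Adam configuration fit together.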