SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections

Authors: Mark Boss, Andreas Engelhardt, Abhishek Kar, Yuanzhen Li, Deqing Sun, Jonathan T. Barron, Hendrik P. A. Lensch, Varun Jampani

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on our new image collections and existing datasets demonstrate better view synthesis and relighting results with SAMURAI compared to existing works. In addition, explicit mesh extraction allows for seamless use of learned 3D assets in graphics applications such as object insertion in AR or games, material editing, etc. Fig. 1 (right) shows some sample application results with 3D assets estimated using SAMURAI.
Researcher Affiliation | Collaboration | Mark Boss (University of Tübingen), Andreas Engelhardt (University of Tübingen), Abhishek Kar (Google), Yuanzhen Li (Google), Deqing Sun (Google), Jonathan T. Barron (Google), Hendrik P. A. Lensch (University of Tübingen), Varun Jampani (Google)
Pseudocode | No | The paper describes the approach in detail but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | 3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] We will release it.
Open Datasets | No | For evaluations, we created new image collections of 8 objects (each with 80 images) captured under unique illuminations and locations and a few different cameras. We refer to this dataset as the SAMURAI dataset. ... Additionally, we evaluate on 2 CC-licensed image collections from online sources of the Statue of Liberty and a chair. We also use the 3 synthetic and 2 real-world datasets of NeRD [11] under varying illumination, where poses are available. Lastly, to showcase the performance with other methods, we use the 2 real-world datasets from NeRD, which are taken under fixed illumination. In total, we evaluate SAMURAI on 17 scenes. Please refer to the supplementary material for an overview of the SAMURAI datasets along with other datasets we experimented with.
Dataset Splits | No | The paper specifies a test split but does not explicitly detail a validation split (e.g., percentages or counts for training, validation, and test sets).
Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper mentions software components like 'Adam optimizer', 'U2-Net', 'NeRF', 'Neural-PIL', 'BARF', and 'NeRD', but it does not specify any version numbers for these software dependencies.
Experiment Setup | Yes | The networks are optimized by an Adam optimizer with a learning rate of 1e-4, exponentially decayed by an order of magnitude every 300k steps. The camera optimization is performed with a learning rate of 3e-3, exponentially decayed by an order of magnitude every 70k steps.
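The quoted schedule can be sketched as a small helper. This is a minimal illustration, not the authors' code: the excerpt does not say whether the decay is continuous or staircase, so continuous exponential decay is assumed here, and the function name `exp_decay_lr` is hypothetical.

```python
def exp_decay_lr(base_lr: float, step: int, decay_steps: int) -> float:
    """Decay base_lr by one order of magnitude every decay_steps steps.

    Continuous exponential decay is assumed; a staircase variant would
    use integer division (step // decay_steps) in the exponent.
    """
    return base_lr * 0.1 ** (step / decay_steps)


# Values from the quoted setup:
def network_lr(step: int) -> float:
    return exp_decay_lr(1e-4, step, 300_000)  # network weights


def camera_lr(step: int) -> float:
    return exp_decay_lr(3e-3, step, 70_000)   # camera parameters
```

For example, `network_lr(300_000)` yields 1e-5, one order of magnitude below the initial 1e-4, matching the stated schedule.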