Subsurface Scattering for Gaussian Splatting

Authors: Jan-Niklas Dihlmann, Arjun Majumdar, Andreas Engelhardt, Raphael Braun, Hendrik P.A. Lensch

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We propose a framework for optimizing an object's shape together with the radiance transfer field given multi-view OLAT (one light at a time) data. Our method decomposes the scene into an explicit surface, represented as 3D Gaussians with a spatially varying BRDF, and an implicit volumetric representation of the scattering component. A learned incident light field accounts for shadowing. We optimize all parameters jointly via ray-traced differentiable rendering. Our approach enables material editing, relighting, and novel view synthesis at interactive rates. We show successful application on synthetic data and introduce a newly acquired multi-view, multi-light dataset of objects in a light-stage setup. Compared to previous work, we achieve comparable or better results at a fraction of the optimization and rendering time while enabling detailed control over material attributes. Project page: https://sss.jdihlmann.com/"
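The abstract describes a decomposition into an explicit surface term and a volumetric scattering term. The sketch below shows one way the two components could be combined per Gaussian, assuming a Lambertian stand-in for the spatially varying BRDF and treating the scattering component as a precomputed per-Gaussian RGB residual; all names and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch

def shade_gaussians(albedo, normals, light_dir, incident_light, sss_radiance):
    """Toy per-Gaussian shading: explicit surface term plus scattering residual.

    albedo, normals, incident_light, sss_radiance: (N, 3) tensors; light_dir: (3,).
    """
    # Lambertian response stands in for the paper's spatially varying BRDF.
    cos_term = (normals * light_dir).sum(dim=-1, keepdim=True).clamp(min=0.0)
    surface = albedo * incident_light * cos_term
    # In the paper the scattering component comes from an implicit volumetric
    # model; here it is simply an externally supplied RGB residual.
    return surface + sss_radiance
```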
Researcher Affiliation | Academia | Jan-Niklas Dihlmann, Arjun Majumdar, Andreas Engelhardt, Raphael Braun, Hendrik P.A. Lensch (University of Tübingen)
Pseudocode | No | The paper describes the method and its components in detail, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block, nor does it present structured steps formatted like pseudocode.
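Since no pseudocode is given, the loop below is a toy, self-contained reconstruction of the joint optimization the abstract describes, reusing the shade_gaussians sketch above. The toy normals, constant incident light field, and single-pixel "render" are all simplifying assumptions made for brevity, not the paper's pipeline.

```python
import torch

# Toy reconstruction of the jointly optimized OLAT training loop.
N = 1024
positions = torch.randn(N, 3, requires_grad=True)  # Gaussian centers
albedo = torch.rand(N, 3, requires_grad=True)      # "spatially varying BRDF"
sss = torch.zeros(N, 3, requires_grad=True)        # scattering residual

optimizer = torch.optim.Adam([positions, albedo, sss], lr=1e-3)

for step in range(100):
    # One light at a time (OLAT): draw a random light direction per step.
    light_dir = torch.nn.functional.normalize(torch.randn(3), dim=0)
    normals = torch.nn.functional.normalize(positions, dim=-1)  # toy normals
    shaded = shade_gaussians(albedo, normals, light_dir,
                             incident_light=torch.ones(N, 3),
                             sss_radiance=sss)
    rendered = shaded.mean(dim=0)             # single-pixel stand-in for rasterization
    target = torch.tensor([0.5, 0.4, 0.3])    # stand-in for a ground-truth OLAT pixel
    loss = (rendered - target).abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```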
Open Source Code | No | "At this point in time we cannot provide open access to code and data. However, it is our clear intent to release the data acquired for this work together with all code needed to reproduce the results and use the data for future work."
Open Datasets | Yes | "We created a new OLAT dataset from synthetically rendered objects and real-world captured objects that capture various effects of SSS materials."; "We will release more details when publishing the dataset."; "Find the full dataset on our project page."; "The 3D models are sourced from the Blender Kit library [2]."
Dataset Splits | Yes | "In total, we have 11,200 train and 22,400 test images of 800×800 resolution for each object."; "A total of approximately 25,000 images are split evenly into a train and test set by uniform sampling from the camera and light positions."
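The even split by uniform sampling over camera and light positions can be sketched as follows; the grid dimensions are assumed for illustration only (chosen to roughly match the ~25,000-image total quoted above).

```python
import random

# Sample uniformly over (camera, light) index pairs and split 50/50.
num_cameras, num_lights = 100, 250   # assumed grid, ~25,000 images total
pairs = [(c, l) for c in range(num_cameras) for l in range(num_lights)]

random.seed(0)                        # fixed seed for a reproducible split
random.shuffle(pairs)
half = len(pairs) // 2
train_pairs, test_pairs = pairs[:half], pairs[half:]
```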
Hardware Specification | Yes | "We use a single NVIDIA RTX 4090 GPU per run on a compute server with a total of 512 GB of RAM."
Software Dependencies | No | "The whole pipeline is implemented with PyTorch using the custom CUDA kernels provided by [19, 10] for rendering."
Experiment Setup | Yes | "We use the ADAM [20] optimizer with default parameters and a learning rate of 0.001 with an exponential decay of 0.99 every 1000 steps. We train for 60k steps, although we observe that the model already achieves good results after 30k steps."
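In PyTorch terms, this schedule maps onto Adam plus a StepLR scheduler (multiply the learning rate by 0.99 every 1,000 steps); the placeholder module below stands in for the paper's jointly optimized scene parameters.

```python
import torch

# The quoted training schedule expressed with standard PyTorch APIs.
model = torch.nn.Linear(3, 3)  # placeholder for the jointly optimized parameters
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1000, gamma=0.99)

for step in range(60_000):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 3)).pow(2).mean()  # dummy loss for illustration
    loss.backward()
    optimizer.step()
    scheduler.step()  # decays the learning rate by 0.99 every 1000 steps
```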