Light Sampling Field and BRDF Representation for Physically-based Neural Rendering

Authors: Jing Yang, Hanyuan Xiao, Wenbin Teng, Yunxuan Cai, Yajie Zhao

ICLR 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments showcase the quality and efficiency of our PBR face skin shader, indicating the effectiveness of our proposed lighting and material representations. Experiments show that our Light Sampling Field is robust enough to learn illumination by local geometry." |
| Researcher Affiliation | Academia | "Jing Yang, Hanyuan Xiao, Wenbin Teng, Yunxuan Cai, Yajie Zhao. Institute for Creative Technologies, University of Southern California. {jyang,hxiao,wteng,ycai,zhao}@ict.usc.edu" |
| Pseudocode | No | The paper does not contain a pseudocode block or a clearly labeled algorithm. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | No | "Our training dataset is composed of a synthetic image dataset and a Lightstage-scanned image dataset. In the synthetic dataset, we used a professionally tuned Maya face shader to render 40-view colored images under all combinations of 21 face assets and 101 HDRI + 86 OLAT illuminations. The Lightstage-scan dataset consists of 16-view captured colored images of 48 subjects in 27 expressions under white illumination." The paper describes its training datasets as custom-generated or custom-captured and does not provide public access (e.g., links or citations) to these specific datasets. (A rough image-count check for these datasets is sketched after the table.) |
| Dataset Splits | No | The paper describes the composition of its training dataset and discusses testing on other datasets, but it does not specify explicit train/validation/test splits (e.g., percentages or counts) for the primary datasets used in its experiments. |
| Hardware Specification | Yes | "In our application, MLP modules can converge in 50,000 iterations (2.6 hours) on a single Tesla V100." |
| Software Dependencies | No | The paper mentions general software such as Maya and specific models such as NeRF, but it does not give version numbers for any software dependency required to reproduce the experiments (e.g., Python, PyTorch, or CUDA versions). |
| Experiment Setup | Yes | "To construct the density field σ, we set α_σ and δ to 10 and 0.5, respectively. In the constructed radiance field, we draw 1024 random rays per batch; along each ray, we sample 64 points for the shading model. The lengths of the encoded position and view direction are 37 and 63, respectively, in the material network and the Light Sampling Field network. Importance light sampling takes 800 light samples z ∈ R³ from the HDRI input for direct lighting. We further downsample the HDRI map to 100×150 resolution and project it onto a sphere. Each pixel on the map is considered an input lighting source. We use the direction and color of each pixel as the lighting embedding fed into the Light Sampling Field network for inference of the coefficients of local SH. We use an 8-layer MLP with 256 neurons in each layer for both networks. MLP modules can converge in 50,000 iterations (2.6 hours) on a single Tesla V100." (A sketch of this lighting-embedding pipeline follows the table.) |
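For the Open Datasets row, the stated view/asset/illumination combinations imply the dataset sizes below. This is a minimal arithmetic sketch, assuming every combination yields exactly one image (the paper does not state this explicitly):

```python
# Implied image counts for the two custom training datasets (assumption:
# one image per view/asset/illumination combination).
synthetic = 40 * 21 * (101 + 86)   # views x face assets x (HDRI + OLAT) = 157,080
lightstage = 16 * 48 * 27          # views x subjects x expressions      = 20,736
print(synthetic, lightstage)
```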
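For the Experiment Setup row, the sketch below illustrates the described lighting embedding: each pixel of the downsampled 100×150 HDRI map becomes a light source whose direction and color are fed to an 8-layer, 256-neuron MLP. This is not the authors' code; the pixel-to-direction convention, the SH output dimension, and all names here are assumptions.

```python
import torch
import torch.nn as nn

def hdri_light_embeddings(hdri: torch.Tensor) -> torch.Tensor:
    """hdri: (H, W, 3) downsampled equirectangular map (e.g., 100x150).
    Returns (H*W, 6) embeddings: each pixel's unit direction and RGB color."""
    H, W, _ = hdri.shape
    theta = (torch.arange(H).float() + 0.5) / H * torch.pi        # polar, [0, pi]
    phi = (torch.arange(W).float() + 0.5) / W * 2 * torch.pi      # azimuth, [0, 2pi)
    theta, phi = torch.meshgrid(theta, phi, indexing="ij")        # (H, W) each
    dirs = torch.stack([torch.sin(theta) * torch.cos(phi),        # assumed
                        torch.cos(theta),                         # equirectangular
                        torch.sin(theta) * torch.sin(phi)],       # convention
                       dim=-1)                                    # (H, W, 3)
    return torch.cat([dirs, hdri], dim=-1).reshape(-1, 6)

class LightSamplingFieldMLP(nn.Module):
    """8 hidden layers of width 256, matching the stated depth/width.
    The output head and out_dim (RGB coefficients of 9 SH bases) are assumptions."""
    def __init__(self, in_dim: int = 6, out_dim: int = 9 * 3):
        super().__init__()
        layers, dim = [], in_dim
        for _ in range(8):
            layers += [nn.Linear(dim, 256), nn.ReLU()]
            dim = 256
        layers.append(nn.Linear(dim, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

emb = hdri_light_embeddings(torch.rand(100, 150, 3))  # 15,000 candidate lights
sh = LightSamplingFieldMLP()(emb)                     # per-light local SH coefficients
```

In the paper's pipeline, the 800 importance-sampled lights for direct lighting would be drawn from these per-pixel sources; how the sampling weights are computed is not specified in the quoted setup, so it is omitted here.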