High-Quality Real-Time Rendering Using Subpixel Sampling Reconstruction
Authors: Boyu Zhang, Hongliang Yuan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that our method significantly outperforms previous approaches in denoising quality and reduces overall time costs, enabling real-time rendering capabilities at 2K resolution. |
| Researcher Affiliation | Collaboration | Boyu Zhang¹,³, Hongliang Yuan²,³* — ¹University of California, Los Angeles; ²Xiaomi Corporation; ³Tencent AI Lab |
| Pseudocode | No | The paper includes architectural diagrams and describes the processes in text but does not provide any pseudocode or algorithm blocks. |
| Open Source Code | Yes | We will release the dataset and code for research purpose. |
| Open Datasets | Yes | A realistic synthesised dataset is built through our subpixel sampling ray tracer. We will release the dataset and code for research purpose. The Bistro Interior and Bistro Exterior (Lumberyard 2017) scenes contain more than one million triangles and transparency, diffuse, specular, and soft shadow effects. |
| Dataset Splits | Yes | We randomly selected 5 consecutive frames for training in each scene. To maximize the utilization of the GPUs, we also randomly cropped the inputs, including the noisy image and auxiliary features, to a resolution of 256×256. We also rendered a validation set of 10 frames and a 50-frame test set for each scene. |
| Hardware Specification | Yes | To showcase the efficiency of our subpixel sampling, we test the rendering time of each stage on the NVIDIA RTX 3090 GPU at a resolution of 1024×2048, see Tab. 2. The comparison was carried out using the same frame for each scene, and the average results are presented in Tab. 3, which shows the average inference time for all six scenes at 1024×2048 and 1024×1080 resolution using an NVIDIA Tesla A100. |
| Software Dependencies | No | The paper mentions using the PyTorch framework but does not specify a version number or other software dependencies with version numbers. |
| Experiment Setup | Yes | The kernel size is 3×3 at all layers. The weight coefficients for Ls, Lt, Le, Lw, and La are 0.7, 0.1, 0.2, 0.4, and 5.0, respectively. We conducted all experiments using the PyTorch framework (Paszke et al. 2019) on 8 NVIDIA Tesla A100 GPUs. Adam optimizer (Kingma and Ba 2015) with β1 = 0.9, β2 = 0.999, and ϵ = 1e−8 is used with the initial learning rate set to 1×10⁻⁴. The learning rate is halved at one-third and two-thirds of the total number of iterations. We set batch size to 8 and trained our model for 200 epochs. |
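The dataset-splits row notes that the noisy image and auxiliary features are randomly cropped to 256×256 during training. A minimal sketch of such a paired random crop is below; the tensor shapes, channel counts, and function name are illustrative assumptions, not taken from the paper.

```python
import torch

def random_crop_pair(noisy, aux, size=256):
    """Apply the same random size x size crop to the noisy image and its
    auxiliary feature maps. Assumes channel-first (C, H, W) tensors;
    the name and signature are illustrative, not from the paper."""
    _, h, w = noisy.shape
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    crop = lambda t: t[:, top:top + size, left:left + size]
    return crop(noisy), crop(aux)

# Illustrative inputs at the paper's 1024x2048 rendering resolution.
noisy = torch.rand(3, 1024, 2048)
aux = torch.rand(7, 1024, 2048)   # auxiliary features; channel count assumed
noisy_crop, aux_crop = random_crop_pair(noisy, aux)
```

The key design point is that both tensors must share one crop window, so the auxiliary features stay aligned with the noisy pixels they describe.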
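The optimizer and learning-rate schedule quoted in the experiment-setup row can be sketched directly in PyTorch. This is a minimal reconstruction under stated assumptions: the model here is a placeholder (the paper's network is not reproduced), and `total_iters` is an arbitrary illustrative value.

```python
import torch

# Placeholder network; the paper's architecture is not reproduced here.
# kernel_size=3 mirrors the stated 3x3 kernels at all layers.
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)

total_iters = 300  # illustrative; the paper trains for 200 epochs at batch size 8

# Adam with beta1=0.9, beta2=0.999, eps=1e-8, initial lr 1e-4, as quoted.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                             betas=(0.9, 0.999), eps=1e-8)

# Halve the learning rate at one-third and two-thirds of training.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer,
    milestones=[total_iters // 3, 2 * total_iters // 3],
    gamma=0.5)

# Loss weights quoted for Ls, Lt, Le, Lw, La (loss terms named in the paper;
# their definitions are not reproduced here).
loss_weights = {"Ls": 0.7, "Lt": 0.1, "Le": 0.2, "Lw": 0.4, "La": 5.0}
```

`MultiStepLR` with `gamma=0.5` implements the "halved at one-third and two-thirds" schedule: the learning rate drops from 1e-4 to 5e-5 at the first milestone and to 2.5e-5 at the second.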