Mip-Grid: Anti-aliased Grid Representations for Neural Radiance Fields

Authors: Seungtae Nam, Daniel Rho, Jong Hwan Ko, Eunbyung Park

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that mip-Grid greatly improves the rendering performance of both methods and even outperforms mip-NeRF on multi-scale datasets while achieving significantly faster training time. We conducted extensive ablation studies and visualized the learned kernels to help readers understand how the proposed method works internally.
Researcher Affiliation | Collaboration | Seungtae Nam[1], Daniel Rho[2], Jong Hwan Ko[1,3], Eunbyung Park[1,3]; [1] Department of Artificial Intelligence, Sungkyunkwan University; [2] AI2XL, KT; [3] Department of Electrical and Computer Engineering, Sungkyunkwan University
Pseudocode | No | No structured pseudocode or algorithm blocks were found.
Open Source Code | Yes | For code and demo videos, please see https://stnamjef.github.io/mipgrid.github.io/.
Open Datasets | Yes | All models were evaluated on the multi-scale Blender dataset [1] with three different metrics: PSNR, SSIM, and LPIPS (VGG) [46]. Mip-TensoRF and its baseline models were further evaluated on the multi-scale LLFF dataset.
Dataset Splits | No | The paper mentions training on the multi-scale Blender and LLFF datasets and evaluating on their test sets, but it does not specify explicit train/validation/test dataset splits (e.g., percentages or sample counts) for reproduction.
Hardware Specification | Yes | The elapsed training time, evaluated using a single RTX 4090 GPU, is shown in the rightmost column.
Software Dependencies | No | The paper does not provide specific software dependencies (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | Yes | Both models apply convolution to a shared grid representation using four learnable kernels of size 3, where each kernel is responsible for generating different scales of grids. We trained mip-TensoRF for 40k iterations on the multi-scale Blender dataset and 25k iterations on the multi-scale LLFF dataset... we trained mip-K-Planes for 30k iterations and optimized the kernels from the beginning. We used the discrete scale coordinate to train mip-TensoRF and mip-K-Planes, and for mip-TensoRF, we conducted a further evaluation using the continuous scale coordinate and the 2D scale coordinate, examining the effectiveness of using more sophisticated scale coordinates.
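The Experiment Setup row describes the core mechanism: a single shared feature grid is convolved with four learnable kernels of size 3, each producing a grid for a different scale. The NumPy sketch below illustrates only that generation step under stated assumptions (the function name, near-uniform kernel initialization, and "same" edge padding are illustrative choices, not the authors' implementation):

```python
import numpy as np

def make_multiscale_grids(shared_grid, kernels):
    """Convolve one shared 2D feature grid with per-scale 3x3 kernels.

    shared_grid: array of shape (C, H, W) shared across all scales.
    kernels: list of (3, 3) arrays, one per output scale (four in the paper).
    Returns a list with one filtered (C, H, W) grid per scale.
    """
    C, H, W = shared_grid.shape
    # "Same" convolution via edge padding (padding scheme is an assumption).
    padded = np.pad(shared_grid, ((0, 0), (1, 1), (1, 1)), mode="edge")
    grids = []
    for k in kernels:
        out = np.zeros_like(shared_grid)
        for dy in range(3):          # accumulate the 3x3 neighborhood,
            for dx in range(3):      # applied identically to every channel
                out += k[dy, dx] * padded[:, dy:dy + H, dx:dx + W]
        grids.append(out)
    return grids

rng = np.random.default_rng(0)
shared = rng.standard_normal((4, 8, 8))
# Four size-3 kernels; a box-filter init stands in for learned weights.
kernels = [np.full((3, 3), 1.0 / 9.0) for _ in range(4)]
multiscale = make_multiscale_grids(shared, kernels)
print(len(multiscale), multiscale[0].shape)  # 4 (4, 8, 8)
```

In training, the kernel weights would be optimized jointly with the shared grid; here they are fixed only to show how one representation yields several scale-specific grids.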