ReGS: Reference-based Controllable Scene Stylization with Gaussian Splatting

Authors: Yiqun Mei, Jiacong Xu, Vishal M. Patel

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Through extensive experiments, we demonstrate that ReGS achieves state-of-the-art visual quality compared to existing stylization methods while enabling real-time view synthesis by embracing the fast rendering speed of Gaussian Splatting." (Abstract, page 1) and "In this section, we demonstrate the stylization quality and our designs through extensive experiments." (Section 4, page 5)
Researcher Affiliation | Academia | "Yiqun Mei, Jiacong Xu*, Vishal M. Patel, Johns Hopkins University, {ymei7, jxu155, vpatel36}@jhu.edu" (Page 1)
Pseudocode | Yes | Algorithm 1: Texture-Guided Control and Algorithm 2: Structured Densification (Figure 12, page 9)
Open Source Code | No | "Code will be released before the conference." (NeurIPS Paper Checklist, Question 5, page 11)
Open Datasets | Yes | "The only available reference-based stylization dataset is provided by [10]. The dataset contains 12 selected scenes from Blender [14], LLFF [101], and Tanks and Temples [102]." (Section 4.1, Datasets, page 5)
Dataset Splits | No | The paper mentions training data and views used for supervision (pseudo views) but does not specify explicit train/validation/test splits with percentages or sample counts.
Hardware Specification | Yes | "The proposed method is implemented using PyTorch and trained on one A5000 GPU." (Section 3.6, page 5)
Software Dependencies | No | "ReGS uses 3D Gaussians [11] as the scene representation and is built upon their official codebase. ... The proposed method is implemented using PyTorch..." (Section 3.6, page 5). No specific version numbers for PyTorch or other libraries are given.
Experiment Setup | Yes | "For texture-guided control, we start accumulating gradients after a warm-up of 100 iterations and then perform the densification operation based on the color gradient statistics of every 100 iterations. The control process stops when it reaches half of the total iterations. The gradient threshold is empirically set to 1e-5 at the beginning, and we linearly reduce it to 5e-6... For balancing parameters we set λrec = λtcm = 1, λdepth = 10, λview = 2, and λcolor = 15... We train our model for 3000 iterations." (Section 3.6, page 5)
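
The Experiment Setup row quotes enough of Section 3.6 to sketch the training schedule in code. Below is a minimal, hypothetical Python sketch, not the authors' released implementation: the numeric hyperparameters come from the quoted text, while the function names, the loop boundaries, and the assumption that the gradient threshold decays linearly over the control phase are illustrative only.

```python
# Minimal, hypothetical sketch of the Section 3.6 training schedule.
# Only the numeric hyperparameters are from the paper; all names and the
# exact interpolation of the gradient threshold are assumptions.

TOTAL_ITERS = 3000              # "We train our model for 3000 iterations."
WARMUP_ITERS = 100              # gradient accumulation starts after this warm-up
DENSIFY_EVERY = 100             # densify from color-gradient statistics every 100 iters
CONTROL_END = TOTAL_ITERS // 2  # texture-guided control stops at half of training

# Loss balancing weights: lambda_rec, lambda_tcm, lambda_depth, lambda_view, lambda_color
LAMBDAS = {"rec": 1.0, "tcm": 1.0, "depth": 10.0, "view": 2.0, "color": 15.0}


def total_loss(losses: dict) -> float:
    """Weighted sum of per-term losses; keys must match LAMBDAS."""
    return sum(LAMBDAS[k] * losses[k] for k in LAMBDAS)


def grad_threshold(it: int) -> float:
    """Color-gradient threshold, decayed from 1e-5 to 5e-6.
    The endpoints are stated in the paper; decaying linearly over the
    control phase is an assumption."""
    t = min(max(it - WARMUP_ITERS, 0) / (CONTROL_END - WARMUP_ITERS), 1.0)
    return (1.0 - t) * 1e-5 + t * 5e-6


def should_densify(it: int) -> bool:
    """True on iterations where texture-guided densification would run."""
    return WARMUP_ITERS < it <= CONTROL_END and it % DENSIFY_EVERY == 0


if __name__ == "__main__":
    runs = [it for it in range(1, TOTAL_ITERS + 1) if should_densify(it)]
    print(f"{len(runs)} densification steps, first at iter {runs[0]}, last at iter {runs[-1]}")
    print(f"threshold decays from {grad_threshold(runs[0]):.1e} to {grad_threshold(runs[-1]):.1e}")
```

Under these assumptions the densification step runs 14 times between iterations 200 and 1500; the actual cadence in the authors' code may differ once it is released.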