A Pre-convolved Representation for Plug-and-Play Neural Illumination Fields
Authors: Yiyu Zhuang, Qi Zhang, Xuan Wang, Hao Zhu, Ying Feng, Xiaoyu Li, Ying Shan, Xun Cao
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments demonstrate the superior performance of novel-view rendering compared to previous works, and the capability to re-render objects under arbitrary NeRF-style environments opens up exciting possibilities for bridging the gap between virtual and real-world scenes. Tab. 1 and Fig. 4 show quantitative and qualitative comparisons of our method against baseline methods, respectively. Metrics in Tab. 1 are averaged over all scenes while the full experiments are accessible in the supplement. |
| Researcher Affiliation | Collaboration | Yiyu Zhuang1*, Qi Zhang2*, Xuan Wang3, Hao Zhu1, Ying Feng2, Xiaoyu Li2, Ying Shan2, Xun Cao1 — 1Nanjing University, Nanjing, China; 2Tencent AI Lab, Shenzhen, China; 3Ant Group, Hangzhou, China |
| Pseudocode | No | The paper describes its method using equations and narrative text, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or figures. |
| Open Source Code | No | The paper does not include any explicit statements or links indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | We then test our method on 8 real-world scenes from Blended MVS (Yao et al. 2020) and Bag of Chips (Park, Holynski, and Seitz 2020), which provide images and depth maps reconstructed by MVS methods or RGBD camera. |
| Dataset Splits | Yes | In our setting, 390 views are rendered around the upper hemisphere of the object, with 180 for training, 10 for validation, and 200 for testing. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions implementation 'with PyTorch' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The number of samples for both the coarse and fine phases is 128. We use the same architecture as Mip-NeRF (8 layers, 256 hidden units, ReLU activations) to train our ambient MLP network, but we apply the ILE module to featurize the input coordinates of incoming rays. We also use an 8-layer MLP with a feature size of 512 and a skip connection in the middle to represent the material MLP network. ... Overall, the entire loss in our method is l = l_rec + λ_s·l_s + λ_p·l_pre, where the weights λ_s and λ_p are empirically set to 10⁻⁴ and 10⁻¹ in all our experiments. |
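The loss weighting quoted in the Experiment Setup row can be sketched as a small helper. This is a minimal illustration, not the authors' code: the function name `total_loss` and its argument names are hypothetical; only the combination l = l_rec + λ_s·l_s + λ_p·l_pre with the empirical weights λ_s = 10⁻⁴ and λ_p = 10⁻¹ comes from the paper excerpt.

```python
def total_loss(l_rec, l_s, l_pre, lambda_s=1e-4, lambda_p=1e-1):
    """Combine the three loss terms described in the paper:
    reconstruction loss, a regularizer weighted by lambda_s (10^-4),
    and a pre-convolution term weighted by lambda_p (10^-1).
    Names are illustrative; the paper's excerpt only gives the formula."""
    return l_rec + lambda_s * l_s + lambda_p * l_pre


# With per-batch scalar losses (here plain floats for illustration),
# the total is dominated by the reconstruction term, as the small
# empirical weights suggest.
l = total_loss(l_rec=0.5, l_s=2.0, l_pre=3.0)
```

In a PyTorch training loop the same expression would be applied to scalar loss tensors before calling `backward()`.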