ACRF: Compressing Explicit Neural Radiance Fields via Attribute Compression
Authors: Guangchi Fang, Qingyong Hu, Longguang Wang, Yulan Guo
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments, which include both synthetic and real-world datasets such as Synthetic-NeRF and Tanks&Temples, demonstrate the superior performance of our proposed algorithm. |
| Researcher Affiliation | Academia | 1Sun Yat-sen University, 2University of Oxford, 3Aviation University of Air Force, 4National University of Defense Technology |
| Pseudocode | No | The paper describes its methods in prose and with mathematical equations, but it does not include structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology. |
| Open Datasets | Yes | Datasets. We conduct experiments on two datasets: Synthetic-NeRF (Mildenhall et al., 2020). This is a view synthesis dataset consisting of 8 synthetic scans, with 100 views used for training and 200 views for testing. Tanks&Temples (Knapitsch et al., 2017). |
| Dataset Splits | No | The paper mentions '100 views used for training and 200 views for testing' for Synthetic-NeRF, but does not explicitly specify a validation set or detailed split percentages for reproduction. |
| Hardware Specification | Yes | For consistency and fairness across all experiments, we utilize a workstation equipped with an Intel Xeon Silver 4210 CPU @2.20 GHz and an NVIDIA TITAN RTX GPU. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | For VQRF and ACRF, we set the same iteration (10K) for finetuning and joint learning (the main part of EM). We set the number of iterations to 100 and the sampling percentage to 1% empirically. |
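The experiment-setup quote above names three concrete hyperparameters. As a reading aid, the following is a minimal, hypothetical configuration sketch collecting those values; the variable and key names are illustrative (the paper releases no code), and only the numbers come from the quoted text.

```python
# Hypothetical config reconstructed from the paper's stated experiment setup.
# Names are illustrative; only the numeric values are taken from the text.
FINETUNE_ITERS = 10_000    # shared by VQRF and ACRF (finetuning / joint EM learning)
EM_ITERS = 100             # number of EM iterations, chosen empirically
SAMPLING_PERCENT = 0.01    # 1% sampling percentage, chosen empirically

config = {
    "finetune_iters": FINETUNE_ITERS,
    "em_iters": EM_ITERS,
    "sampling_percent": SAMPLING_PERCENT,
}

print(config)
```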