Unsupervised Multi-View Object Segmentation Using Radiance Field Propagation
Authors: Xinhang Liu, Jiaben Chen, Huai Yu, Yu-Wing Tai, Chi-Keung Tang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that RFP achieves feasible segmentation results that are more accurate than previous unsupervised image/scene segmentation approaches, and are comparable to existing supervised NeRF-based methods. Extensive experiments and ablation studies justify the design of each component and demonstrate the effectiveness of the system on applications such as individual object rendering and editing. |
| Researcher Affiliation | Collaboration | Xinhang Liu (1), Jiaben Chen (2), Huai Yu (3), Yu-Wing Tai (1, 4), Chi-Keung Tang (1) — (1) HKUST, (2) UC San Diego, (3) Wuhan University, (4) Kuaishou Technology |
| Pseudocode | No | The paper describes its methods through prose and equations, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | Not yet, but we will make the code and data publicly available upon acceptance. |
| Open Datasets | Yes | We first test RFP on scenes with a single foreground object using two real datasets, Local Light Field Fusion (LLFF) [22] and Common Objects in 3D (CO3D) [30]. To test RFP on scenes with multiple objects, we build a synthetic dataset upon ClevrTex [16]. |
| Dataset Splits | Yes | We split all the datasets into training views and testing views with a ratio of around 9 to 1. (An illustrative split sketch follows the table.) |
| Hardware Specification | No | The main text of the paper does not specify the exact hardware used for experiments (e.g., specific GPU or CPU models). It defers this information to the supplemental material. |
| Software Dependencies | No | The paper mentions various software components and frameworks but does not provide specific version numbers for software dependencies (e.g., Python, PyTorch, CUDA). |
| Experiment Setup | No | The paper states 'We gently urge readers to check the supplementary material for more qualitative results in the form of pictures and videos, as well as the settings of our experiments in detail.', indicating that specific experimental setup details are not in the main text. |
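
The paper states only the approximate 9:1 train/test ratio for its view splits and defers the exact protocol to the supplementary material. Below is a minimal, hypothetical Python sketch of one way such a split could be reproduced; the function name `split_views`, the evenly spaced hold-out strategy, and the default `test_every=10` are assumptions, not the authors' code.

```python
import numpy as np

def split_views(num_views: int, test_every: int = 10, seed=None):
    """Split view indices into train/test sets at roughly (test_every - 1):1.

    Hypothetical helper: the paper only reports a ~9:1 train/test ratio,
    so both the hold-out strategy (every `test_every`-th view, a common
    NeRF-style convention) and the optional shuffling are assumptions.
    """
    indices = np.arange(num_views)
    if seed is not None:
        rng = np.random.default_rng(seed)
        rng.shuffle(indices)
    test_idx = indices[::test_every]           # ~10% of views held out
    train_idx = np.setdiff1d(indices, test_idx)  # remaining ~90% for training
    return train_idx.tolist(), test_idx.tolist()

# Example: a 40-view scene yields 36 training views and 4 test views.
train_views, test_views = split_views(40, test_every=10)
print(len(train_views), len(test_views))  # 36 4
```

Holding out evenly spaced views, rather than a purely random subset, follows the common LLFF/NeRF evaluation convention and keeps the test views spread across the camera trajectory; whether RFP uses this exact scheme is not stated in the main text.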