GNeSF: Generalizable Neural Semantic Fields

Authors: Hanlin Chen, Chen Li, Mengqi Guo, Zhiwen Yan, Gim Hee Lee

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results show that our approach achieves comparable performance with scene-specific approaches. More importantly, our approach can even outperform existing strong supervision-based approaches with only 2D annotations." (Sec. 5, Experiments) "We evaluate our method on the tasks of semantic view synthesis in Sec. 5.1.1 and 3D semantic segmentation in Sec. 5.1.2. Additionally, we validate the effectiveness of the proposed modules in Sec. 5.2."
Researcher Affiliation | Academia | Hanlin Chen, Chen Li, Mengqi Guo, Zhiwen Yan, Gim Hee Lee, Department of Computer Science, National University of Singapore, {hanlin.chen, gimhee.lee}@comp.nus.edu.sg
Pseudocode | No | No pseudocode or algorithm blocks are explicitly presented in the paper.
Open Source Code | Yes | "Our source code is available at: https://github.com/HLinChen/GNeSF."
Open Datasets | Yes | "We perform the experiments on two indoor datasets: ScanNet (V2) [13] and Replica [38]."
Dataset Splits | Yes | "We follow [31] and [40] on the three training/validation/test splits commonly used in previous works."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions software components such as Mask2Former and Swin-Transformer but does not give version numbers for these or other dependencies (a sketch for recording the reproduction environment follows this table).
Experiment Setup | No | The paper does not explicitly state specific hyperparameters (e.g., learning rate, batch size, number of epochs) or detailed training configurations in the main text.
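Because the paper reports neither hardware specifications nor software versions, anyone rerunning the experiments may want to log the environment they actually used alongside their results. The following Python sketch is hypothetical and is not part of the GNeSF release; it assumes only that PyTorch is installed and records the details that the paper leaves unspecified.

```python
# environment_report.py -- hypothetical helper, not part of the GNeSF repository.
# Records the software/hardware details that the paper leaves unspecified,
# so a reproduction run can be compared against later attempts.
import json
import platform
import sys

import torch


def collect_environment() -> dict:
    """Gather Python, PyTorch, CUDA, and GPU details for a reproduction log."""
    info = {
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "torch": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
    }
    if torch.cuda.is_available():
        info["cuda_version"] = torch.version.cuda
        info["gpu"] = torch.cuda.get_device_name(0)
        info["gpu_memory_gb"] = round(
            torch.cuda.get_device_properties(0).total_memory / 1024**3, 1
        )
    return info


if __name__ == "__main__":
    # Print (or redirect to a file next to the experiment outputs).
    print(json.dumps(collect_environment(), indent=2))
```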