See and Think: Disentangling Semantic Scene Completion
Authors: Shice Liu, Yu Hu, Yiming Zeng, Qiankun Tang, Beibei Jin, Yinhe Han, Xiaowei Li
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that regardless of inputting a single depth or RGB-D, our framework can generate high-quality semantic scene completion, and outperforms state-of-the-art approaches on both synthetic and real datasets. |
| Researcher Affiliation | Academia | 1 State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences; 2 University of Chinese Academy of Sciences |
| Pseudocode | No | The paper describes the network architecture and processing steps in detail but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code is available at https://github.com/ShiceLiu/SATNet. |
| Open Datasets | Yes | We evaluate our framework on two benchmark datasets, including the popular NYUv2 dataset [41] and the large-scale 3D scene repository SUNCG dataset [1]. |
| Dataset Splits | Yes | The NYUv2 dataset, a real dataset, is composed of 1449 RGB-D images and is standardly partitioned into 795 training samples and 654 testing samples... SUNCG-D consists of 139368 training samples and 470 testing samples, while SUNCG-RGBD consists of 13011 training samples and 499 testing samples. |
| Hardware Specification | Yes | It takes us around a week to accomplish the training period on GeForce GTX 1080Ti GPU. |
| Software Dependencies | No | The paper states 'We implement our framework in PyTorch.' However, it does not specify version numbers for PyTorch or any other software dependencies, which would be necessary for reproducible setup. |
| Experiment Setup | Yes | We use cross entropy loss and SGD to optimize with a momentum of 0.9, a weight decay of 0.0001 and a batch size of 1. In addition, the learning rate of SNet and TNet is 0.001 and 0.01, respectively. |
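
For concreteness, the settings quoted in the Experiment Setup row map onto the following minimal PyTorch sketch. The module names `snet` and `tnet` are hypothetical placeholders for the paper's SNet and TNet (the layer definitions are illustrative, not the paper's architecture); only the loss, optimizer, and hyperparameter values come from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the paper's SNet and TNet; the real
# architectures are described in the paper, not reproduced here.
snet = nn.Sequential(nn.Conv2d(3, 64, kernel_size=3, padding=1))
tnet = nn.Sequential(nn.Conv3d(12, 12, kernel_size=3, padding=1))

# Cross entropy loss, as stated in the setup row.
criterion = nn.CrossEntropyLoss()

# SGD with momentum 0.9 and weight decay 0.0001. Two parameter groups
# express the per-network learning rates: 0.001 for SNet (the default
# lr below) and 0.01 for TNet (the group-level override).
optimizer = torch.optim.SGD(
    [
        {"params": snet.parameters()},              # uses default lr = 0.001
        {"params": tnet.parameters(), "lr": 0.01},  # TNet trains at 0.01
    ],
    lr=0.001,
    momentum=0.9,
    weight_decay=0.0001,
)
```

Parameter groups are the idiomatic PyTorch way to train sub-networks at different learning rates under a single optimizer; with the paper's batch size of 1, each optimizer step would then process a single sample.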