FFNet: Frequency Fusion Network for Semantic Scene Completion
Authors: Xuzhi Wang, Di Lin, Liang Wan
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate FFNet intensively on the public SSC benchmarks, where FFNet surpasses the state-of-the-art methods. |
| Researcher Affiliation | Academia | Xuzhi Wang, Di Lin, Liang Wan; College of Intelligence and Computing, Tianjin University; {wangxuzhi, di.lin, lwan}@tju.edu.cn |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code package of FFNet is available at https://github.com/alanWXZ/FFNet. |
| Open Datasets | Yes | We evaluate FFNet intensively on the public SSC benchmarks, where FFNet surpasses the state-of-the-art methods. The completion task aims to infer the 3D geometry occupancy of the voxelized scene and the semantic label of each voxel, simultaneously (Song et al. 2017; Liu et al. 2018a; Zhang et al. 2019). |
| Dataset Splits | No | The paper mentions training on NYU and NYU CAD datasets but does not explicitly provide details about specific train/validation/test splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | Yes | We train our model with batch size 6 in 2 GeForce GTX 3090 Ti GPUs. |
| Software Dependencies | No | The paper states 'We implement our framework in PyTorch' but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We train our model with batch size 6 in 2 GeForce GTX 3090 Ti GPUs. We adopt mini-batch SGD with a momentum of 0.9 and weight decay of 0.0005. For both NYU and NYU CAD datasets, we train our network for 350 epochs with an initial learning rate of 0.1. We use a poly learning rate policy where the initial learning rate is updated by (1 − iteration/max_iteration)^0.9. |
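
The optimizer and schedule quoted above translate into a few lines of PyTorch. Below is a minimal sketch, assuming a standard `LambdaLR` scheduler stepped once per iteration; the model and the iterations-per-epoch count are placeholders (the paper does not report per-epoch iteration counts, and the real architecture lives at https://github.com/alanWXZ/FFNet).

```python
import torch

# Placeholder model; the actual FFNet architecture is in the authors' repo
# (https://github.com/alanWXZ/FFNet).
model = torch.nn.Linear(10, 10)

# Settings quoted from the paper: mini-batch SGD, momentum 0.9,
# weight decay 0.0005, initial learning rate 0.1.
optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=0.0005
)

# 350 epochs (from the paper); iterations per epoch is an assumed placeholder.
iters_per_epoch = 100
max_iteration = 350 * iters_per_epoch

# Poly learning-rate policy: lr = 0.1 * (1 - iteration / max_iteration) ** 0.9
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda it: (1 - it / max_iteration) ** 0.9
)

for iteration in range(max_iteration):
    # ... forward pass, loss computation, and loss.backward() elided ...
    optimizer.step()
    scheduler.step()  # decay the learning rate once per iteration
```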