AttAN: Attention Adversarial Networks for 3D Point Cloud Semantic Segmentation
Authors: Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu, Qigong Sun
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on ScanNet and S3DIS datasets show that this framework effectively improves the segmentation quality and outperforms other state-of-the-art methods. ... To evaluate our approach, we conduct extensive and comprehensive experiments on public semantic segmentation datasets ScanNet [Dai et al., 2017] and S3DIS [Armeni et al., 2016]. Experimental results show that AttAN achieves state-of-the-art performance. |
| Researcher Affiliation | Academia | Gege Zhang, Qinghua Ma, Licheng Jiao, Fang Liu and Qigong Sun, Xidian University. {ggzhang_1, qhma}@stu.xidian.edu.cn, lchjiao@mail.xidian.edu.cn, {f63liu, xd_qigongsun}@163.com |
| Pseudocode | Yes | Algorithm 1 The adversarial training process in AttAN |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | ScanNet. The newest version of this dataset includes 1513 scanned and reconstructed scenes... Then we submit our results on test scenes to the official benchmark evaluation server3 (http://kaldir.vc.in.tum.de/scannet_benchmark/semantic_label_3d)... Stanford Large-Scale 3D Indoor Spaces (S3DIS). This dataset contains scanned point cloud data... We process the dataset similar as [Qi et al., 2017a]. |
| Dataset Splits | Yes | ScanNet. ... During the training phase, we use 1201 scenes for training and 312 scenes for validating, both without extra RGB information. ... Moreover, following [Armeni et al., 2016; Qi et al., 2017a], 6-fold cross validation on all areas is adopted for further evaluation. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions various algorithms and optimizers but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | We use an Adam optimizer with momentum parameters β1 = 0.5, β2 = 0.999, and a batch size of 16 to train the model. ... The base learning rate is set to 0.001. ... we employ exponential moving average strategy with decay rate of 0.99... In our experiments, τ is set to 1.0. ... Pre-train Sθ by minimizing the first term of Eq.6. Then training process is executed in way of training A on every even epoch and S on every epoch... |
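The reported setup (segmenter S trained every epoch, adversarial branch A only on even epochs, and an exponential moving average with decay 0.99) can be sketched as follows. This is a framework-agnostic illustration, not the authors' code: `training_schedule` and `ema_update` are hypothetical helpers, and whether epoch counting starts at 0 or 1 is an assumption.

```python
# Sketch of AttAN's alternating adversarial schedule and EMA tracking,
# based only on the setup quoted above. Hypothetical helper names;
# actual per-step updates (losses, Adam with beta1=0.5, beta2=0.999,
# lr=0.001, batch size 16) are omitted.

EMA_DECAY = 0.99  # decay rate quoted in the paper


def ema_update(shadow, weights, decay=EMA_DECAY):
    """One exponential-moving-average step over a dict of parameters."""
    return {k: decay * shadow[k] + (1.0 - decay) * w for k, w in weights.items()}


def training_schedule(num_epochs):
    """Yield (epoch, modules-updated) pairs.

    S is trained every epoch; A only on even epochs, per the paper's
    description (here epochs 0, 2, 4, ... count as even -- an assumption).
    """
    for epoch in range(num_epochs):
        modules = ["S", "A"] if epoch % 2 == 0 else ["S"]
        yield epoch, modules
```

For example, `list(training_schedule(4))` alternates `["S", "A"]` and `["S"]`, and a single `ema_update({"w": 1.0}, {"w": 0.0})` moves the shadow weight to 0.99, matching the stated decay.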