I3DOL: Incremental 3D Object Learning without Catastrophic Forgetting
Authors: Jiahua Dong, Yang Cong, Gan Sun, Bingtao Ma, Lichen Wang
AAAI 2021, pp. 6066-6074
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on 3D representative datasets validate the superiority of our I3DOL framework. |
| Researcher Affiliation | Academia | 1State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China. 2Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, 110016, China. 3University of Chinese Academy of Sciences, Beijing, 100049, China. 4Northeastern University, Boston, USA. |
| Pseudocode | Yes | Algorithm 1 Optimization Framework of I3DOL Model. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | Generally, three representative point cloud datasets, i.e., ModelNet (Wu et al. 2015), ShapeNet (Chang et al. 2015) and ScanNet (Dai et al. 2017) are employed to validate the superiority of our I3DOL model. |
| Dataset Splits | Yes | ModelNet (Wu et al. 2015) consists of 9843 training samples and 2468 testing samples, which are clean 3D CAD models from 40 classes. ... ShapeNet (Chang et al. 2015) contains 35037 training examples and 5053 validation examples. ... ScanNet (Dai et al. 2017) with 17 categories is composed of scanned and reconstructed real-world indoor scenes, where the training and validation samples are 12060 and 3416, respectively. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models (e.g., NVIDIA A100) or CPU models (e.g., Intel Core i7) used for running the experiments. |
| Software Dependencies | No | The paper mentions using PointNet as a backbone and the Adam optimizer, but it does not specify version numbers for these or any other software dependencies (e.g., PointNet vX.Y, TensorFlow vX.Y, PyTorch vX.Y). |
| Experiment Setup | Yes | For the configuration of network architecture, we employ PointNet (Qi et al. 2017a) as the backbone framework of encoder E, and apply a four-layer multi-layer perceptron as classifier C. Furthermore, we utilize the Adam optimizer for model optimization, where the learning rate and weight decay are initialized as 0.0025 and 0.0005. The number of constructed local geometric structures is set as 64... |
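
The experiment-setup row can be translated into a minimal PyTorch sketch of the reported configuration: a PointNet-style encoder, a four-layer MLP classifier, and Adam with learning rate 0.0025 and weight decay 0.0005. The class names and the simplified encoder below are illustrative assumptions, not the authors' code; the paper uses the full PointNet backbone and the rest of the I3DOL machinery (e.g., the 64 local geometric structures), which is not modeled here.

```python
import torch
import torch.nn as nn


class SimplePointEncoder(nn.Module):
    """Simplified stand-in for the PointNet backbone (the paper uses full PointNet)."""

    def __init__(self, feat_dim=1024):
        super().__init__()
        # Shared per-point MLP followed by a global max-pool, in the spirit of PointNet.
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1), nn.BatchNorm1d(feat_dim), nn.ReLU(),
        )

    def forward(self, x):  # x: (B, 3, N) point clouds
        return self.mlp(x).max(dim=2).values  # (B, feat_dim) global feature


class MLPClassifier(nn.Module):
    """Four-layer multi-layer perceptron classifier, as described in the paper."""

    def __init__(self, feat_dim=1024, num_classes=40):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)


encoder, classifier = SimplePointEncoder(), MLPClassifier()
# Adam with the learning rate / weight decay values reported in the paper.
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()),
    lr=0.0025, weight_decay=0.0005,
)
```

Since the paper gives no hardware or software versions, this sketch fixes only what the setup row states (backbone type, classifier depth, optimizer, and its hyperparameters); hidden sizes and feature dimensions are assumptions.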