BiCo-Net: Regress Globally, Match Locally for Robust 6D Pose Estimation
Authors: Zelin Xu, Yichen Zhang, Ke Chen, Kui Jia
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on three popular benchmark datasets verify that our method achieves state-of-the-art performance, especially in the more challenging, severely occluded scenes. |
| Researcher Affiliation | Academia | Zelin Xu¹, Yichen Zhang¹, Ke Chen¹﹐², and Kui Jia¹﹐² — ¹South China University of Technology, ²Peng Cheng Laboratory. {eexuzelin, eezyc}@mail.scut.edu.cn, {chenk, kuijia}@scut.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Source codes are available at https://github.com/Gorilla-Lab-SCUT/BiCo-Net. |
| Open Datasets | Yes | To evaluate our BiCo-Net comprehensively, experiments are conducted on three popular benchmarks: the YCB-Video dataset [Xiang et al., 2018], LineMOD [Hinterstoisser et al., 2011], and the more challenging Occlusion LineMOD [Brachmann et al., 2014]. |
| Dataset Splits | No | The paper specifies training and testing splits for the datasets but does not explicitly detail a separate validation dataset split (e.g., in terms of percentages or counts). |
| Hardware Specification | Yes | As a result, the average time for processing a frame at inference is 75 ms with a GTX 1080 Ti GPU. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' but does not specify any software libraries with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The numbers of scene/model points, i.e., N/M, are set to 1000/1000. In point-pair pose computation, we downsample the scene points and model points to Z = 100 points by FPS... The hyper-parameter λ in the losses of the BCM-S and BCM-M branches is empirically set to 0.05. We use the Adam optimizer with a 10⁻⁴ learning rate to train our model for 50 epochs, and the learning rate decays by 0.3 every 10 epochs. (Sketches of the FPS downsampling and the training schedule follow the table.) |
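The experiment setup quotes farthest point sampling (FPS) as the way scene and model points are downsampled to Z = 100. The NumPy sketch below shows the standard greedy FPS algorithm as an assumption about what the paper means; the function name `farthest_point_sampling` and the random `scene` array are illustrative, not taken from the authors' repository.

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, z: int) -> np.ndarray:
    """Greedily pick z points that are mutually far apart.

    points: (N, 3) array of 3D coordinates; z: number of points to keep.
    Returns the indices of the selected points.
    """
    n = points.shape[0]
    selected = np.zeros(z, dtype=np.int64)
    # Squared distance from every point to the current selected set.
    dist = np.full(n, np.inf)
    # Start from an arbitrary seed point (index 0 here).
    selected[0] = 0
    for i in range(1, z):
        # Refresh distances using the most recently selected point.
        diff = points - points[selected[i - 1]]
        dist = np.minimum(dist, np.einsum("ij,ij->i", diff, diff))
        # Pick the point farthest from everything selected so far.
        selected[i] = int(np.argmax(dist))
    return selected

# e.g. downsample 1000 scene points to Z = 100, matching the stated setup
scene = np.random.rand(1000, 3)
downsampled = scene[farthest_point_sampling(scene, 100)]
```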
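The stated optimizer settings (Adam, 10⁻⁴ learning rate, 50 epochs, learning rate decayed by 0.3 every 10 epochs) map naturally onto a PyTorch step schedule. This is a minimal sketch assuming "decays by 0.3" means multiplying the learning rate by a factor of 0.3 every 10 epochs; the `model` placeholder is hypothetical and stands in for the actual BiCo-Net network.

```python
import torch

# Hypothetical stand-in; the real BiCo-Net architecture lives in the
# authors' repository (https://github.com/Gorilla-Lab-SCUT/BiCo-Net).
model = torch.nn.Linear(3, 3)

# Adam with a 1e-4 learning rate, as stated in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Assumption: the learning rate is multiplied by 0.3 every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.3)

for epoch in range(50):  # 50 training epochs
    # ... one pass over the training set would go here ...
    scheduler.step()
```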