Active Object Reconstruction Using a Guided View Planner

Authors: Xin Yang, Yuanbo Wang, Yaru Wang, Baocai Yin, Qiang Zhang, Xiaopeng Wei, Hongbo Fu

IJCAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that our model (1) increases reconstruction accuracy with an increasing number of views and (2) generally predicts a more informative sequence of views for object reconstruction compared to alternative methods.
Researcher Affiliation | Academia | 1 Dalian University of Technology, 2 City University of Hong Kong; xinyang@dlut.edu.cn, yuanbodlut@gmail.com, wangyaru@mail.dlut.edu.cn, {ybc, zhangq, xpwei}@dlut.edu.cn, hongbofu@cityu.edu.hk
Pseudocode | No | The paper describes the network architecture and methodology but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a link to, or an explicit statement about, the release of its own source code.
Open Datasets | Yes | We used the dataset from [Yan et al., 2016], which is based on ShapeNetCore [Wu et al., 2015].
Dataset Splits | No | The paper mentions a 'train/test data split' but does not explicitly describe a validation set or its split.
Hardware Specification | Yes | Our model was trained and tested under the PyTorch framework, accelerated by a GPU (NVIDIA GTX 1080 Ti).
Software Dependencies | No | The paper mentions using the 'PyTorch framework' and 'ADAM solver' but does not specify version numbers for these software dependencies.
Experiment Setup | Yes | We updated the weights using the ADAM solver with batch size 16, 200 epochs, λ_vox = λ_proj = 0.5. We set λ_v = 10, λ_p = 10, λ_m = 0.04.
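For illustration only, the sketch below shows how the training configuration reported in the Experiment Setup row might look in PyTorch. The module name `view_planner`, the placeholder architecture, and the combined-loss form are assumptions made for this sketch; the paper's actual network, loss definitions, and the role of λ_v, λ_p, λ_m are not reproduced here.

```python
# Minimal sketch of the reported optimizer and hyperparameter setup.
# `view_planner`, its architecture, and `combined_loss` are placeholders
# assumed for illustration, not the authors' code.
import torch

view_planner = torch.nn.Linear(256, 8)                    # stand-in for the actual network
optimizer = torch.optim.Adam(view_planner.parameters())   # "ADAM solver" from the paper

batch_size = 16
num_epochs = 200
lambda_vox = lambda_proj = 0.5                   # reported loss weights
lambda_v, lambda_p, lambda_m = 10.0, 10.0, 0.04  # additional reported weights (roles not detailed here)

def combined_loss(loss_vox, loss_proj):
    # Assumed form: weighted sum of the volumetric and projection terms.
    return lambda_vox * loss_vox + lambda_proj * loss_proj
```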