3D Shape Reconstruction from Vision and Touch

Authors: Edward Smith, Roberto Calandra, Adriana Romero, Georgia Gkioxari, David Meger, Jitendra Malik, Michal Drozdzal

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section '5 Experimental Results': 'In the following section, we describe the experiments designed to validate our approach to 3D reconstruction that leverages both visual and haptic sensory information. We start by outlining our model selection process. Then, using our best model, we validate generalization of the complementary role of vision and touch for 3D shape reconstruction. We follow by examining the effect of increasing number of grasps and then measure the ability of our approach to effectively extrapolate around touch sites. For all experiments, details with respect to experiment design, optimization procedures, hardware used, runtime, and hyper-parameters considered can be found in the supplemental materials.'
Researcher Affiliation | Collaboration | 1 Facebook AI Research, 2 McGill University, 3 University of California, Berkeley
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | 'Code for our system is publicly available on a GitHub repository, to ensure reproducible experimental comparison.' Repository: https://github.com/facebookresearch/3D-Vision-and-Touch
Open Datasets | No | The paper states, 'To do so, we introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects.' and 'we build a dataset of simulated haptic object interactions to benchmark 3D shape reconstructions algorithms in this setting;'. While a new dataset is introduced for benchmarking, there is no explicit statement or link confirming its public availability for direct download or access.
Dataset Splits | No | The paper mentions using a 'validation set' for model selection and states 'We split the dataset into training and test sets with approximately a 90:10 ratio.' However, it does not provide specific percentages or sample counts for the validation split itself, only the overall train/test ratio.
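As an illustration of the row above, the following is a minimal, hypothetical Python sketch of a 90:10 train/test split with a validation subset carved out of the training portion. The 10% validation fraction, the function name split_dataset, and the object-id inputs are assumptions for illustration; the paper does not specify them.

    # Hypothetical sketch of the reported ~90:10 train/test split.
    # The 10% validation carve-out is an assumption; the paper does not
    # state the size of its validation set.
    import random

    def split_dataset(object_ids, test_ratio=0.10, val_ratio=0.10, seed=0):
        """Shuffle object ids and split them into train/val/test lists."""
        ids = list(object_ids)
        random.Random(seed).shuffle(ids)

        n_test = int(len(ids) * test_ratio)      # ~10% held-out test objects
        test, remaining = ids[:n_test], ids[n_test:]

        n_val = int(len(remaining) * val_ratio)  # assumed validation fraction
        val, train = remaining[:n_val], remaining[n_val:]
        return train, val, test

    train, val, test = split_dataset(range(1000))
    print(len(train), len(val), len(test))  # 810 90 100

Splitting by object id (rather than by individual grasp or image) is one way to avoid leakage between partitions; whether the paper splits at this level is not stated in the main text.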
Hardware Specification | No | The paper states: 'For all experiments, details with respect to experiment design, optimization procedures, hardware used, runtime, and hyper-parameters considered can be found in the supplemental materials.' No specific hardware details are provided in the main text.
Software Dependencies | No | The paper mentions 'Pytorch [49]' as a software dependency, but does not provide a specific version number. No other software dependencies with version numbers are listed in the main text.
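Because neither the hardware (previous row) nor the PyTorch version is reported in the main text, the following hypothetical Python sketch shows one way such environment details could be logged alongside experimental results. It assumes PyTorch is installed and is not taken from the paper's released code.

    # Hypothetical environment-logging helper for reproducibility reports.
    # Uses only standard-library and PyTorch attributes; not from the paper.
    import platform
    import torch

    def log_environment():
        """Print the Python/PyTorch versions and any visible GPUs."""
        print("python :", platform.python_version())
        print("pytorch:", torch.__version__)
        print("cuda   :", torch.version.cuda)  # None on CPU-only builds
        if torch.cuda.is_available():
            for i in range(torch.cuda.device_count()):
                print(f"gpu {i}  :", torch.cuda.get_device_name(i))
        else:
            print("gpu    : none detected")

    if __name__ == "__main__":
        log_environment()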
Experiment Setup | No | The paper states: 'For all experiments, details with respect to experiment design, optimization procedures, hardware used, runtime, and hyper-parameters considered can be found in the supplemental materials.' No specific experimental setup details (like hyperparameters) are provided in the main text.