Robot Learning in Homes: Improving Generalization and Reducing Dataset Bias
Authors: Abhinav Gupta, Adithyavairavan Murali, Dhiraj Prakashchand Gandhi, Lerrel Pinto
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our models by physically executing grasps on a collection of novel objects in multiple unseen homes. The models trained with our home dataset showed a marked improvement of 43.7% over a baseline model trained with data collected in lab. Our architecture which explicitly models the latent noise in the dataset also performed 10% better than one that did not factor out the noise. |
| Researcher Affiliation | Academia | Abhinav Gupta, Adithyavairavan Murali, Dhiraj Gandhi, Lerrel Pinto; The Robotics Institute, Carnegie Mellon University. Direct correspondence to: {abhinavg,amurali,dgandhi,lerrelp}@cs.cmu.edu |
| Pseudocode | No | The paper does not contain any clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing code for the methodology or a link to a code repository. |
| Open Datasets | No | The paper mentions collecting a dataset of about 28K grasps ('We collected a dataset of about 28K grasps.') and references the 'Lab-Baxter' dataset by citing a paper [4], but it does not provide concrete access information (e.g., URL, DOI, specific repository) for either its own collected dataset or the cited one. |
| Dataset Splits | No | The paper mentions training data and testing data ('For training, each datapoint consists of an image I...', 'For testing, we used 20 novel objects...'), and also refers to 'held-out data' for evaluation, but it does not provide specific percentages or counts for training, validation, or test splits. It does not mention a separate validation set split. |
| Hardware Specification | Yes | Our robot consists of a Dobot Magician robotic arm [14] mounted on a Kobuki mobile base [15]. The robotic arm came with four degrees of freedom (DOF) and we customized the last link with a two axis wrist. We also modified the original pneumatic gripper with a two-fingered electric gripper [16]. An Intel R200 RGBD [17] camera was also mounted with a pan-tilt attachment at a height of 1m above the ground. All the processing for the robot is performed on an on-board laptop [18] attached on the back. The laptop has an Intel Core i5-8250U processor with 8GB of RAM and runs for around three hours on a single charge. |
| Software Dependencies | No | The paper mentions software like 'PyTorch [21]', 'ResNet-18 [22]', 'tiny-YOLO [24]', and 'Adam [23]', but it does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We train our network in two stages. First, we only train GPN using the noisy patch... This training is done over five epochs of the data. In the second stage, we add the NMN and marginalization operator to simultaneously train NMN and GPN in an end-to-end fashion. This is done over 25 epochs of the data. ... The optimizer used for training is Adam [23]. |
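The authors do not release code, but the two-stage training quoted above (stage one trains the grasp prediction network, GPN, alone for five epochs; stage two adds the noise modelling network, NMN, and a marginalization operator and trains both end-to-end for 25 epochs with Adam) can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the module names `GPN` and `NMN`, the ResNet-18 heads, the number of candidate patches, the learning rate, and the data-loader format are all assumptions.

```python
# Hedged sketch of the two-stage training described in the paper. All
# architectural and hyperparameter details below (patch counts, learning rate,
# loss form, loader format) are assumptions, since no official code is released.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_ANGLE_BINS = 18   # assumed discretization of the grasp angle
NUM_CANDIDATES = 9    # assumed number of candidate (noise-shifted) patches

class GPN(nn.Module):
    """Grasp Prediction Network: ResNet-18 backbone, per-angle success logits."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, NUM_ANGLE_BINS)
        self.net = backbone

    def forward(self, patches):                         # (B, C, 3, 224, 224)
        b, c = patches.shape[:2]
        logits = self.net(patches.flatten(0, 1))        # (B*C, 18)
        return logits.view(b, c, NUM_ANGLE_BINS)

class NMN(nn.Module):
    """Noise Modelling Network: scores which candidate patch is the true grasp point."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.net = backbone

    def forward(self, patches):
        b, c = patches.shape[:2]
        scores = self.net(patches.flatten(0, 1)).view(b, c)
        return torch.softmax(scores, dim=1)             # (B, C) patch weights

def marginalize(gpn_probs, patch_weights):
    """Average per-patch grasp probabilities under the NMN's noise distribution."""
    return (gpn_probs * patch_weights.unsqueeze(-1)).sum(dim=1)   # (B, 18)

def train(loader, device="cpu"):
    # Assumed loader format: patches (B, C, 3, 224, 224), labels (B, 18) in {0, 1}.
    gpn, nmn = GPN().to(device), NMN().to(device)
    bce = nn.BCEWithLogitsLoss()

    # Stage 1: train GPN alone on the noisy observed patch, five epochs.
    opt = torch.optim.Adam(gpn.parameters(), lr=1e-4)
    for _ in range(5):
        for patches, labels in loader:
            patches, labels = patches.to(device), labels.to(device)
            logits = gpn(patches[:, :1])                # observed patch only
            loss = bce(logits[:, 0], labels)
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: train GPN + NMN end-to-end with marginalization, 25 epochs.
    opt = torch.optim.Adam(list(gpn.parameters()) + list(nmn.parameters()), lr=1e-4)
    for _ in range(25):
        for patches, labels in loader:
            patches, labels = patches.to(device), labels.to(device)
            probs = torch.sigmoid(gpn(patches))         # (B, C, 18)
            weights = nmn(patches)                      # (B, C)
            marginal = marginalize(probs, weights).clamp(1e-6, 1 - 1e-6)
            loss = nn.functional.binary_cross_entropy(marginal, labels)
            opt.zero_grad(); loss.backward(); opt.step()
    return gpn, nmn
```

The marginalization step is the part that factors out label noise: rather than scoring only the single observed patch, per-patch grasp probabilities are averaged under the NMN's distribution over where the true grasp point lies, which is the mechanism the paper credits for the 10% improvement over a model that does not model the noise.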