Learning to Poke by Poking: Experiential Learning of Intuitive Physics
Authors: Pulkit Agrawal, Ashvin V. Nair, Pieter Abbeel, Jitendra Malik, Sergey Levine
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our model is evaluated on a real-world robotic manipulation task that requires displacing objects to target locations by poking. The robot gathered over 400 hours of experience by executing more than 100K pokes on different objects. Our experiments show that this joint modeling approach outperforms alternative methods. |
| Researcher Affiliation | Academia | Pulkit Agrawal, Ashvin Nair, Pieter Abbeel, Jitendra Malik, Sergey Levine; Berkeley Artificial Intelligence Research Laboratory (BAIR), University of California, Berkeley; {pulkitag,anair17,pabbeel,malik,svlevine}@berkeley.edu |
| Pseudocode | No | The paper describes the model training and planning process in narrative text and with diagrams, but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states: 'Supplementary materials and videos can be found at http://ashvin.me/pokebot-website/.' This is a project website, but the paper does not explicitly state that the source code for the methodology is released there, nor does it provide a direct link to a code repository. |
| Open Datasets | No | The paper reports that 'to date our robot has interacted with objects for more than 400 hours and in process collected more than 100K pokes on 16 distinct objects,' but it does not provide information on how to access this collected dataset publicly. |
| Dataset Splits | No | The paper mentions 'Model training was performed' and discusses different amounts of 'training data (10K, 20K examples)' used in simulation studies. However, it does not provide explicit details on how the dataset was split into training, validation, and test sets (e.g., percentages, sample counts, or methods like cross-validation). |
| Hardware Specification | Yes | The robot is equipped with a Kinect camera and a gripper for poking objects kept on a table in front of it. We are grateful to NVIDIA corporation for donating K40 GPUs and providing access to the NVIDIA PSG cluster. |
| Software Dependencies | No | The paper describes the use of deep neural networks and refers to the 'AlexNet (Krizhevsky et al., 2012)' architecture, but it does not specify any software names with version numbers (e.g., TensorFlow, PyTorch, or specific Python libraries and their versions) used for implementation or experiments. |
| Experiment Setup | Yes | For modeling multimodal poke distributions, the poke location, angle, and length are discretized into a 20 × 20 grid, 36 bins, and 11 bins respectively; the 11th poke-length bin denotes no poke. The paper states: 'We used λ = 0.1 in all our experiments. More details about model training are provided in the supplementary materials.' (A sketch of this discretization follows the table.) |
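
The experiment-setup row above describes how the continuous poke action is discretized (a 20 × 20 location grid, 36 angle bins, and 11 length bins with the last bin reserved for "no poke"). The following is a minimal sketch of that discretization, assuming an image size, a maximum poke length, and a function name (`IMG_SIZE`, `MAX_POKE_LEN`, `discretize_poke`) that are not given in the paper and are purely illustrative.

```python
import numpy as np

# Hypothetical ranges: the paper does not specify the workspace image
# resolution or the maximum poke length, so these values are assumptions.
IMG_SIZE = 224          # assumed input image side length (pixels)
MAX_POKE_LEN = 0.10     # assumed maximum poke length (meters)

N_LOC_BINS = 20         # poke location: 20 x 20 grid over the image
N_ANGLE_BINS = 36       # poke angle: 36 bins over [0, 2*pi)
N_LEN_BINS = 11         # poke length: 10 bins plus an 11th "no poke" bin


def discretize_poke(px, py, angle, length, poke=True):
    """Map a continuous poke (pixel location, angle, length) to bin indices."""
    # Location: quantize pixel coordinates onto the 20 x 20 grid.
    loc_x = min(int(px / IMG_SIZE * N_LOC_BINS), N_LOC_BINS - 1)
    loc_y = min(int(py / IMG_SIZE * N_LOC_BINS), N_LOC_BINS - 1)

    # Angle: wrap into [0, 2*pi) and quantize into 36 bins.
    angle_bin = int((angle % (2 * np.pi)) / (2 * np.pi) * N_ANGLE_BINS)

    # Length: bins 0-9 cover [0, MAX_POKE_LEN]; bin 10 denotes "no poke".
    if not poke:
        len_bin = N_LEN_BINS - 1
    else:
        len_bin = min(int(length / MAX_POKE_LEN * (N_LEN_BINS - 1)),
                      N_LEN_BINS - 2)

    return (loc_x, loc_y), angle_bin, len_bin


# Example: a poke at pixel (100, 50), at 45 degrees, 4 cm long.
print(discretize_poke(100, 50, np.pi / 4, 0.04))
```

Discretizing the action space this way lets the model output per-bin class probabilities, which is how the paper handles multimodal poke distributions; the reported λ = 0.1 weights the relative contribution of the joint training objectives.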