Neural Inverse Kinematic
Authors: Raphael Bensadoun, Shir Gur, Nitsan Blau, Lior Wolf
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments demonstrate that IKNet outperforms a wide variety of IK methods, both optimization-based and learning-based. In the path-following problem, our method generates multiple solutions, each more accurate and more stable than the single solution of the best baseline method. Additionally, we show that our probabilistic method displays robustness to noisy dimensions in the kinematic chain. Moreover, a relatively small number of examples is sufficient to fine-tune a trained model to perform well on a similar but unseen kinematic chain. Lastly, the representation learned by IKNet seems to help in learning other tasks. |
| Researcher Affiliation | Industry | Mentee Robotics. Correspondence to: Raphael Bensadoun <raphael@menteebot.com>, Shir Gur <shir@menteebot.com>. |
| Pseudocode | No | No structured pseudocode or algorithm blocks are present. |
| Open Source Code | No | The paper mentions third-party open-source projects (e.g., Digit Robot.jl, IKPy) but does not state that the code for their proposed method (IKNet) is open-source or provide a link to it. |
| Open Datasets | No | We train our model on a dataset of 20K random (reachable) points. No specific link, DOI, or formal citation is provided for this generated dataset; a hypothetical sketch of how such points can be generated appears below the table. |
| Dataset Splits | No | We train our model on a dataset of 20K random (reachable) points, and test on a different set of 1K (reachable) points. No specific mention of a validation split. |
| Hardware Specification | No | For this purpose, the learning-based methods were run on a CPU. This statement is too general and lacks specific hardware details (e.g., CPU model, GPU, memory). |
| Software Dependencies | No | The paper does not provide specific software names with version numbers for its implementation or key dependencies (e.g., PyTorch version, specific libraries). |
| Experiment Setup | Yes | The network f is a 4-layer fully-connected network. Each linear layer has a dimension of 1024, with ReLU and batchnorm following each layer. The last layer of f is followed by N projection layers that map the last dimension of 1024 to the vector of weights, θk, for each network gk. Each network is composed of three linear layers with a hidden dimension of 256, and ReLU activation between the layers. In order not to explicitly select an optimal value for the parameter m, we set it at a very high value of m = 50. A code sketch of this architecture follows the table. |
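
The experiment-setup row describes a hypernetwork: f embeds the target and, through N projection layers, emits a flat weight vector θk that parameterizes each small network gk. Below is a minimal PyTorch sketch of that structure, not the authors' implementation; the target dimension, the number of joints N, and the input/output widths of gk are not stated in the excerpt and are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Sizes not given in the paper excerpt are illustrative assumptions.
TARGET_DIM = 3                    # assumed 3-D end-effector target
N_JOINTS = 4                      # assumed N (one g_k per joint)
G_IN, G_HID, G_OUT = 1, 256, 2    # assumed g_k input/output widths

def make_f(in_dim=TARGET_DIM, width=1024, depth=4):
    """f: 4-layer fully-connected net, ReLU and batch norm after each layer."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU(), nn.BatchNorm1d(width)]
        d = width
    return nn.Sequential(*layers)

# Parameter shapes of one g_k: three linear layers (256 hidden), ReLU between.
G_SHAPES = [(G_HID, G_IN), (G_HID,),
            (G_HID, G_HID), (G_HID,),
            (G_OUT, G_HID), (G_OUT,)]
N_PARAMS = sum(torch.Size(s).numel() for s in G_SHAPES)

f = make_f()
# N projection layers mapping f's 1024-d output to a flat weight vector theta_k.
projections = nn.ModuleList(nn.Linear(1024, N_PARAMS) for _ in range(N_JOINTS))

def run_g(theta, x):
    """Unpack a flat theta_k into g_k's weights and apply the 3-layer MLP."""
    ws, i = [], 0
    for shape in G_SHAPES:
        n = torch.Size(shape).numel()
        ws.append(theta[i:i + n].view(shape))
        i += n
    h = torch.relu(x @ ws[0].T + ws[1])
    h = torch.relu(h @ ws[2].T + ws[3])
    return h @ ws[4].T + ws[5]

# One pass: batch of two targets (BatchNorm1d needs batch size > 1 in training).
targets = torch.randn(2, TARGET_DIM)
features = f(targets)                          # (2, 1024)
theta_0 = projections[0](features)             # per-sample weights for g_0
y = run_g(theta_0[0], torch.randn(G_IN))       # apply g_0 for sample 0
print(y.shape)                                 # torch.Size([2])
```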
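
The paper does not say how the 20K training / 1K test random reachable points were generated. A common recipe, assumed here rather than taken from the paper, is to sample joint angles uniformly within limits and push them through forward kinematics, which guarantees reachability by construction; the four-link planar chain and link lengths below are hypothetical.

```python
import numpy as np

def forward_kinematics(angles, link_lengths):
    """End-effector (x, y) of a planar chain via cumulative joint angles."""
    cum = np.cumsum(angles)
    return np.array([np.sum(link_lengths * np.cos(cum)),
                     np.sum(link_lengths * np.sin(cum))])

def sample_reachable(n, link_lengths, limits=(-np.pi, np.pi), seed=0):
    """Draw n joint configurations uniformly and map them through FK, so
    every resulting target point is reachable by construction."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(*limits, size=(n, len(link_lengths)))
    points = np.stack([forward_kinematics(a, link_lengths) for a in angles])
    return angles, points

# Hypothetical 4-link planar chain; 20K train / 1K test points as in the paper.
links = np.array([1.0, 0.8, 0.6, 0.4])
_, train_points = sample_reachable(20_000, links, seed=0)
_, test_points = sample_reachable(1_000, links, seed=1)
print(train_points.shape, test_points.shape)   # (20000, 2) (1000, 2)
```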