Associate Latent Encodings in Learning from Demonstrations

Authors: Hang Yin, Francisco Melo, Aude Billard, Ana Paiva

AAAI 2017

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The implementation and results are demonstrated in a robotic handwriting scenario, where the visual sensory input and the arm-joint writing motion are learned and coupled. We show the latent representations successfully construct a task manifold for the observed sensor modalities. Moreover, the learned associations can be exploited to directly synthesize arm-joint handwriting motion from an image input in an end-to-end manner. (A hedged sketch of such a latent association appears after the table.) |
| Researcher Affiliation | Academia | 1 GAIPS, INESC-ID and Instituto Superior Técnico, Universidade de Lisboa; 2 Learning Algorithms and Systems Laboratory, École Polytechnique Fédérale de Lausanne. {hang.yin, aude.billard}@epfl.ch, {fmelo, ana.paiva}@inesc-id.pt |
| Pseudocode | No | No explicit pseudocode or algorithm blocks were found. |
| Open Source Code | Yes | The code implementation and trained models are publicly accessible at https://github.com/navigator8972/vae_assoc. |
| Open Datasets | No | The dataset used in the experiment is the UJI Pen Characters 2 dataset, from which, for simplicity, only single-stroke alphabetical letters and digits are used. No direct link, DOI, or formal citation for public access is provided within the paper text. |
| Dataset Splits | No | The other hyperparameters, including the length of the latent variable and the weight of the association term, are selected according to cross-validation of the reconstruction performance. However, specific percentages or counts for training, validation, or test splits are not provided. |
| Hardware Specification | No | No specific hardware details (such as GPU or CPU models, memory, or processing units) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions neural network models and ADAM for optimization but does not provide specific software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | Yes | The entire model is trained through stochastic gradient descent with adaptive moment estimation (ADAM) (Kingma and Ba 2015), a learning rate of 1e-4, and a batch size of 64. The other hyperparameters, including the length of the latent variable and the weight of the association term, are selected according to cross-validation of the reconstruction performance. (A hedged sketch of this training setup follows the table.) |
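
To make the associated-latent-encoding idea concrete, below is a minimal sketch, not the authors' code, of two VAEs (one per modality) whose latent codes are tied together. It assumes the association term is an L2 penalty between the latent means of paired image/motion samples; the dimensions, architecture, and the weight `w_assoc` are illustrative placeholders rather than values from the paper.

```python
# Minimal sketch of "associated" VAEs in PyTorch (not the authors' code).
# Assumption: two modalities (image pixels, arm-joint trajectories) get
# separate VAEs whose latent means are pulled together by an L2 penalty
# on paired samples; all sizes and weights here are illustrative.
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, x_dim, z_dim=8, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Standard VAE objective: reconstruction error plus KL divergence.
    rec = nn.functional.mse_loss(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

def assoc_loss(img_vae, mot_vae, img, motion, w_assoc=1.0):
    # Joint loss: per-modality ELBOs plus an association term that pulls
    # the latent means of paired (image, motion) samples together.
    img_recon, img_mu, img_lv = img_vae(img)
    mot_recon, mot_mu, mot_lv = mot_vae(motion)
    assoc = torch.sum((img_mu - mot_mu) ** 2)  # assumed L2 association term
    return (vae_loss(img, img_recon, img_mu, img_lv)
            + vae_loss(motion, mot_recon, mot_mu, mot_lv)
            + w_assoc * assoc)
```

Cross-modal synthesis then amounts to encoding with one VAE and decoding with the other's decoder, e.g. `mot_vae.dec(img_vae.encode(img)[0])`, which mirrors the image-to-motion synthesis described in the paper.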
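
The reported optimizer settings (ADAM, learning rate 1e-4, batch size 64) translate directly into a training loop. The sketch below reuses `VAE` and `assoc_loss` from above; the data tensors are random stand-ins for the paired handwriting dataset, and the dimensions and epoch count are arbitrary.

```python
# Hedged sketch of the reported training setup: ADAM, lr 1e-4, batch size 64.
# `images` and `motions` are hypothetical stand-ins for the paired dataset.
from torch.utils.data import DataLoader, TensorDataset

img_vae = VAE(x_dim=28 * 28)    # illustrative image dimensionality
mot_vae = VAE(x_dim=2 * 100)    # illustrative trajectory dimensionality
opt = torch.optim.Adam(list(img_vae.parameters()) + list(mot_vae.parameters()),
                       lr=1e-4)

images = torch.randn(1024, 28 * 28)    # dummy paired data
motions = torch.randn(1024, 2 * 100)
loader = DataLoader(TensorDataset(images, motions), batch_size=64, shuffle=True)

for epoch in range(10):  # epoch count not reported; arbitrary here
    for img, motion in loader:
        opt.zero_grad()
        loss = assoc_loss(img_vae, mot_vae, img, motion, w_assoc=1.0)
        loss.backward()
        opt.step()
```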