Unsupervised Emergence of Egocentric Spatial Structure from Sensorimotor Prediction

Authors: Alban Laflaquière, Michael Garcia Ortiz

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We simulate two different agent-environment setups: Discrete world and Arm in a room. The first one is a minimalist artificial setup designed to test our hypotheses in optimal conditions. ... We evaluate the three types of exploration on the two experimental setups. Each simulation is run 50 times, with all random parameters drawn independently on each trial. During training, the measures D_topo and D_metric are evaluated on a fixed regular sampling of the motor space. Their evolution, as well as the evolution of the loss, are displayed in Fig. 2. Additionally, Fig. 3 shows the final representations h of the regular motor sampling, for one randomly selected trial of each simulation. (A minimal sketch of this evaluation protocol appears after the table.)
Researcher Affiliation | Industry | Alban Laflaquière, AI Lab, SoftBank Robotics Europe, Paris, France (alaflaquiere@softbankrobotics.com); Michael Garcia Ortiz, AI Lab, SoftBank Robotics Europe, Paris, France (mgarciaortiz@softbankrobotics.com)
Pseudocode | No | The paper describes the neural network architecture and the experimental procedures in prose and with diagrams, but it does not include a dedicated pseudocode block or algorithm listing.
Open Source Code | No | The paper does not contain any explicit statement about making the source code available, nor does it provide a link to a code repository.
Open Datasets | No | The paper uses data generated from custom-built simulations ("Discrete world" and "Arm in a room") rather than a publicly available dataset with a specific access link or citation. (A toy stand-in generator is sketched after the table.)
Dataset Splits | No | The paper describes data generation through simulations and evaluates its measures during training on a "fixed regular sampling of the motor space", but it does not specify explicit training, validation, or test splits in terms of percentages, sample counts, or references to predefined partitions.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU or CPU models, or memory specifications.
Software Dependencies | No | The paper mentions software components such as a Multi-Layer Perceptron (MLP), the Adam optimizer [19], and rectified linear units [30], but it does not provide specific version numbers for any programming languages, libraries, or frameworks used (e.g., Python, TensorFlow, PyTorch).
Experiment Setup | Yes | The Net_enc and Net_pred modules are both simple MLPs with 3 hidden layers of 256 units. All hidden units use ReLU activations [30], except for the last layer of Net_pred, which uses a linear activation. The network is trained end-to-end using the Adam optimizer [19] with a learning rate of 10^-4. (These hyperparameters are sketched in code below the table.)
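
The quoted evaluation protocol (50 independent trials; measures tracked throughout training on a fixed regular sampling of the motor space) is concrete enough to scaffold. The NumPy sketch below is illustrative only: the grid resolution and motor-space dimensionality are assumptions, and d_topo/d_metric are generic stand-ins in the spirit of the paper's D_topo and D_metric, not their published formulas.

```python
import numpy as np

N_TRIALS = 50        # each simulation is run 50 times (stated in the paper)
GRID_STEPS = 10      # hypothetical resolution of the regular motor sampling
MOTOR_DIM = 2        # hypothetical motor-space dimensionality

def regular_motor_sampling(steps=GRID_STEPS, dim=MOTOR_DIM):
    """Fixed regular grid over the motor space, reused at every evaluation."""
    axes = [np.linspace(-1.0, 1.0, steps)] * dim
    grid = np.meshgrid(*axes, indexing="ij")
    return np.stack([g.ravel() for g in grid], axis=1)   # (steps**dim, dim)

def knn_indices(x, k):
    """Indices of the k nearest neighbours of each row of x (self excluded)."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def d_topo(h, p, k=5):
    """Stand-in for D_topo: average fraction of k-nearest-neighbour
    disagreement between representation space and position space."""
    nh, npos = knn_indices(h, k), knn_indices(p, k)
    overlap = [len(set(a) & set(b)) / k for a, b in zip(nh, npos)]
    return 1.0 - float(np.mean(overlap))

def d_metric(h, p):
    """Stand-in for D_metric: mean absolute difference between the two
    normalised pairwise-distance matrices."""
    dh = np.linalg.norm(h[:, None, :] - h[None, :, :], axis=-1)
    dp = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    dh /= max(dh.max(), 1e-12)
    dp /= max(dp.max(), 1e-12)
    return float(np.abs(dh - dp).mean())

# Protocol: repeat training N_TRIALS times with independently drawn random
# parameters, evaluating d_topo and d_metric on the same fixed sampling
# at regular intervals during training.
```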
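Because no dataset is released, a reproduction has to regenerate the data from scratch. The toy generator below is purely illustrative of a "Discrete world"-style setup: the grid size, the number of motor configurations, the redundant motor-to-position mapping, and the random sensory values are all assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 5          # hypothetical 5x5 discrete world
N_MOTORS = 50     # hypothetical number of discrete motor configurations
N_ENVS = 10       # hypothetical number of environment states

# Redundant motor-to-position mapping: several motor configurations may
# place the sensor in the same cell (assumed, in the spirit of the setup).
motor_to_pos = rng.integers(0, GRID, size=(N_MOTORS, 2))

# Each environment state assigns a random sensory value to every cell.
envs = rng.random((N_ENVS, GRID, GRID))

def sample_pair():
    """Draw one (motor index, sensation) training pair from a randomly
    chosen environment state."""
    e = rng.integers(N_ENVS)
    m = rng.integers(N_MOTORS)
    x, y = motor_to_pos[m]
    return m, envs[e, x, y]
```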
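The quoted hyperparameters translate directly into code. The sketch below assumes PyTorch (the paper names no framework), hypothetical input/output dimensions, and an MSE prediction loss; only the layer widths, activations, and optimizer settings come from the paper, and the full model's conditioning on environmental context is omitted as a simplification.

```python
import torch
import torch.nn as nn

MOTOR_DIM, SENSOR_DIM, H_DIM = 3, 10, 3   # hypothetical dimensions

def mlp(in_dim, out_dim, hidden=256, n_hidden=3, linear_out=False):
    """MLP with n_hidden hidden layers of `hidden` ReLU units each."""
    dims = [in_dim] + [hidden] * n_hidden
    layers = []
    for a, b in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    layers.append(nn.Linear(hidden, out_dim))
    if not linear_out:
        layers.append(nn.ReLU())   # ReLU everywhere, per the quoted text
    return nn.Sequential(*layers)

# Net_enc: motor state -> representation h.
net_enc = mlp(MOTOR_DIM, H_DIM)
# Net_pred: representation -> predicted sensation (linear last layer).
net_pred = mlp(H_DIM, SENSOR_DIM, linear_out=True)

opt = torch.optim.Adam(
    list(net_enc.parameters()) + list(net_pred.parameters()), lr=1e-4
)

def train_step(m, s):
    """One end-to-end step on a batch of (motor, sensation) pairs;
    an MSE prediction loss is assumed."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(net_pred(net_enc(m)), s)
    loss.backward()
    opt.step()
    return loss.item()
```

The representations h = net_enc(m) of the fixed motor sampling are what the D_topo/D_metric measures above would be computed on during training.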