Learning to Act for Perceiving in Partially Unknown Environments

Authors: Leonardo Lamanna, Mohamadreza Faridghasemnia, Alfonso Gerevini, Alessandro Saetti, Alessandro Saffiotti, Luciano Serafini, Paolo Traverso

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally evaluate the proposed approach on several synthetic datasets, and show the feasibility of our approach in a real-world scenario that involves noisy perceptions and noisy actions on a real robot.
Researcher Affiliation | Academia | (1) Fondazione Bruno Kessler, Trento, Italy; (2) Center for Applied Autonomous Sensor Systems, Örebro University, Sweden; (3) Department of Information Engineering, University of Brescia, Italy
Pseudocode | Yes | Algorithm 1: FIND BELIEF STATES
Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code.
Open Datasets | Yes | CIFAR-10, CIFAR-100, EuroSAT, FER, MNIST, Oxford Pet
Dataset Splits | No | Table 1 reports '#Train' and '#Test' set sizes, but no explicit validation split is given; the paper only refers to training on the 'noisy training set' and evaluating on the 'noisy test set'.
Hardware Specification | No | The paper mentions a SoftBank Robotics Pepper humanoid robot for the real-world experiments, but does not provide specific hardware specifications (e.g., CPU, GPU models, or memory) for the computational resources used for training or for the main experiments.
Software Dependencies | No | The paper mentions general software components such as 'ResNet' and the 'Fast Downward' planner, but does not provide version numbers for any software dependencies or libraries.
Experiment Setup | Yes | All images have been modified by: (i) decreasing the brightness by 90% with a 0.5 probability; (ii) blurring the image by 50% with a 0.5 probability; and (iii) adding an occluding circle centered in a position uniformly sampled from the image size and with a diameter equal to 70% of the image size. Our approach (with a confidence threshold t = 0.9)... K-means algorithm with K = 8. The viewpoints where the property is observable are obtained by selecting the clusters with an average confidence higher than a threshold t = 0.8.
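
The corruption procedure quoted in the Experiment Setup row is concrete enough to sketch. Below is a minimal Python/Pillow rendition of the three transformations; the library choice, the function name `corrupt_image`, and the exact Gaussian-blur radius are assumptions on our part, since the paper only specifies the probabilities and magnitudes quoted above.

```python
# Minimal sketch (not the authors' code) of the image-corruption step
# described in the "Experiment Setup" row, assuming Pillow as the imaging
# library. The blur radius is an illustrative assumption.
import random

from PIL import Image, ImageDraw, ImageEnhance, ImageFilter


def corrupt_image(img: Image.Image, rng: random.Random) -> Image.Image:
    img = img.copy()
    w, h = img.size

    # (i) Decrease brightness by 90% with probability 0.5.
    if rng.random() < 0.5:
        img = ImageEnhance.Brightness(img).enhance(0.1)

    # (ii) Blur the image with probability 0.5; the paper says "by 50%"
    # without naming a kernel, so a fixed Gaussian radius is assumed here.
    if rng.random() < 0.5:
        img = img.filter(ImageFilter.GaussianBlur(radius=2))

    # (iii) Add an occluding circle with diameter equal to 70% of the image
    # size, centered at a position sampled uniformly over the image.
    d = 0.7 * min(w, h)
    cx, cy = rng.uniform(0, w), rng.uniform(0, h)
    ImageDraw.Draw(img).ellipse(
        [cx - d / 2, cy - d / 2, cx + d / 2, cy + d / 2], fill="black"
    )
    return img
```

The remaining settings quoted in that row (confidence threshold t = 0.9, K-means with K = 8, cluster selection at average confidence above t = 0.8) are applied downstream of such corrupted images and are not reproduced in this sketch.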