Moving-Landmark Assisted Distributed Learning Based Decentralized Cooperative Localization (DL-DCL) with Fault Tolerance

Authors: Shubhankar Gupta, Suresh Sundaram

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Simulations involving sensor failures inducing an approximately 40-60x increase in the nominal bias show DL-DCL's estimation performance to be approximately 40% better than the well-known covariance-based estimate fusion methods. For the evaluation of DL-DCL's implementability and fault-tolerance capability in practice, a high-fidelity simulation is carried out in Gazebo with ROS2. (A representative covariance-based fusion baseline is sketched after the table.)
Researcher Affiliation | Academia | Shubhankar Gupta, Suresh Sundaram; Artificial Intelligence and Robotics Lab (AIRL), Department of Aerospace Engineering, Indian Institute of Science, Bengaluru, Karnataka, India; shubhankarg@iisc.ac.in, vssuresh@iisc.ac.in
Pseudocode | Yes | Algorithm 1: DL-DCL algorithm for the i-th agent, i ∈ [N]. (See the illustrative fusion-step sketch after the table.)
Open Source Code | Yes | For the evaluation of its sim2real aspect, DL-DCL is also simulated in Gazebo with ROS2 (video and GitHub links in the supplementary document).
Open Datasets | No | The paper describes a simulation scenario rather than using a publicly available dataset, stating: 'Results are averaged over 50 simulation runs. Each simulation run is carried out for a horizon of T = 1400 discrete time steps, with a sampling period of ∆T = 0.1 second.' No concrete access information for a public dataset is provided.
Dataset Splits | No | The paper describes a simulation scenario and does not mention explicit training, validation, or test dataset splits. It states: 'Results are averaged over 50 simulation runs. Each simulation run is carried out for a horizon of T = 1400 discrete time steps, with a sampling period of ∆T = 0.1 second.'
Hardware Specification | No | The paper mentions 'a high-fidelity simulation is carried out in Gazebo with ROS2' but does not specify any hardware details such as CPU, GPU models, or memory.
Software Dependencies | No | The paper mentions 'Gazebo with ROS2' but does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | For the DL-DCL algorithm, the learning parameters are set to η_w = 2 and η_γ = 2 via parameter tuning. DL-DCL periodically resets its cumulative loss variables to zero after every T_o = 200 discrete time steps to avoid bias build-up during learning. The noise ν^x_{t,i} and ν^φ_{t,i} in the IMUs (...) is assumed to be Gaussian with means of 0.05 m and 0.5 deg, respectively, and covariances of 0.1·(0.05)² m² and 0.1·(0.05)² rad², respectively.
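
The biased IMU noise model quoted in the Experiment Setup row can be reproduced in a few lines of NumPy as a quick sanity check. This is a minimal sketch: the variable names, array shapes, and random seed are illustrative and not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# IMU noise as stated in the experiment setup:
#   position noise nu^x_{t,i}   ~ N(mean = 0.05 m,  var = 0.1*(0.05)^2 m^2)
#   heading  noise nu^phi_{t,i} ~ N(mean = 0.5 deg, var = 0.1*(0.05)^2 rad^2)
T_STEPS = 1400                       # horizon of one simulation run
noise_std = np.sqrt(0.1) * 0.05      # ~0.0158; same numeric std for both channels

nu_x = rng.normal(loc=0.05, scale=noise_std, size=T_STEPS)               # metres
nu_phi = rng.normal(loc=np.deg2rad(0.5), scale=noise_std, size=T_STEPS)  # radians
```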
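Algorithm 1 itself (Pseudocode row) is not reproduced here. Given the learning parameters reported above (learning rates η_w and η_γ, and a reset of cumulative losses every T_o = 200 steps), the per-agent fusion step plausibly resembles an exponential-weights (Hedge-style) update over candidate estimates. The sketch below is written under that assumption; the function names and the loss definition are hypothetical and not the paper's.

```python
import numpy as np

ETA_W = 2.0   # learning rate eta_w reported in the experiment setup
T_O = 200     # cumulative-loss reset period, in discrete time steps

def fuse_estimates(estimates, cum_losses):
    """Weight candidate pose estimates by exponentiated negative cumulative loss
    and return their weighted average (hypothetical fusion rule)."""
    weights = np.exp(-ETA_W * np.asarray(cum_losses))
    weights /= weights.sum()
    return np.average(np.asarray(estimates), axis=0, weights=weights)

def accumulate_losses(cum_losses, step_losses, t):
    """Add the current step's per-candidate losses, resetting every T_O steps
    to avoid the bias build-up mentioned in the experiment setup."""
    if t % T_O == 0:
        cum_losses = np.zeros_like(cum_losses)
    return cum_losses + np.asarray(step_losses)
```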
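The Research Type row compares DL-DCL against 'covariance-based estimate fusion methods'. Covariance Intersection is a representative method of that kind; the sketch below is illustrative only and is not claimed to be the specific baseline implemented in the paper.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=51):
    """Fuse two estimates (x1, P1) and (x2, P2) with Covariance Intersection.

    The mixing weight omega is chosen by a simple grid search minimizing the
    trace of the fused covariance (a common heuristic for picking omega).
    """
    best_x, best_P = None, None
    for omega in np.linspace(0.0, 1.0, n_grid):
        info = omega * np.linalg.inv(P1) + (1.0 - omega) * np.linalg.inv(P2)
        P = np.linalg.inv(info)
        if best_P is None or np.trace(P) < np.trace(best_P):
            best_x = P @ (omega * np.linalg.inv(P1) @ x1
                          + (1.0 - omega) * np.linalg.inv(P2) @ x2)
            best_P = P
    return best_x, best_P
```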