Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Collision Avoiding Max-Sum for Mobile Sensor Teams
Authors: Arseniy Pertzovsky, Roie Zivan, Noa Agmon
JAIR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present an experimental evaluation in extensive simulations, showing that CAMS produces high-quality collision-free coverage also in large and complex scenarios. We further present evidence from experiments in a real multi-robot system that CAMS outperforms the state of the art in terms of convergence time. |
| Researcher Affiliation | Academia | Arseni Pertzovskiy EMAIL Roie Zivan EMAIL Department of Industrial Engineering and Management Ben-Gurion University of the Negev Beer Sheva, Israel Noa Agmon EMAIL Department of Computer Science and Engineering Bar-Ilan University Ramat Gan, Israel |
| Pseudocode | No | The paper describes the CAMS algorithm and its components in detail, including message calculations and properties. However, it does not present a formal pseudocode block or algorithm listing labeled as such. |
| Open Source Code | Yes | Our software simulation included four environments from the aforementioned benchmark: empty (48×48), random (32×32), warehouse (63×161), and game (180×251). A demonstration of the environments is presented in Figure 9. The number of agents included in each of the scenarios was 20 and the number of targets was 10. For each environment, we tested the algorithms in both static and dynamic settings. In the static settings, the number of targets and their locations were constant, while in the dynamic setting targets randomly switched to new positions every t steps (we used t = 40). The ER values of all targets were 100, the sensing range of all sensors, sr, was set to 5, and the credibility of each sensor was 22. The positive utilities of location function-nodes were selected randomly from the range [10⁻¹⁰, 10⁻⁵]. In each step of the algorithm, each mobile sensor could either move to one of the adjacent locations or stay in its current location. For each environment, the experiments included 20 scenarios, and we report the average remaining coverage requirement and the accumulated number of collisions obtained by each algorithm solving these scenarios. Each algorithm performed 200 steps, where in each step the mobile sensors selected locations. We compared CAMS with six other algorithms: (1) Max-sum_MST (including FMR and OVP), in which the mobile sensor movements were not affected by collisions; (2) Max-sum_MST with breakdowns, in which colliding agents exhibited a breakdown and stopped moving, but kept on sensing and communicating with other sensors; (3) DSA_MST, the standard DSA version described in (Yedidsion et al., 2018); (4) DSA_MST with breakdowns. Like in the Max-sum_MST version, the result of a collision in this version was that the agents stopped moving but kept on sensing; (5) CADSA, a collision-avoiding version of DSA_MST. We ranked the mobile sensors according to their indexes. Each mobile sensor updated its neighbors before moving to a new location. A mobile sensor did not move to a new location if a different mobile sensor with a higher rank reported that it planned to move to the same location. (6) DSSA, the distributed stochastic search algorithm, designed to avoid collisions between ships (Hirayama et al., 2019). DSSA allows mobile sensors to keep suggesting locations they intend to move to while checking for collisions until they converge to a collision-avoiding decision in each step. (7) Random walk, which serves as a baseline for the number of expected collisions. CAMS and Max-sum_MST performed 10 iterations of Max-sum in each step before the mobile sensors selected their locations. The remaining coverage requirement in each step was calculated as Σᵢ rem(tᵢ). We performed t-tests with p < 0.01 between the remaining coverage results at the 200th step for each algorithm solving each scenario, to evaluate statistical significance when comparing the average results produced by the different algorithms, and report standard deviations as well (presented in Tables 2 and 3 in the appendix). [5] https://github.com/Arseni1919/dcop_simulator_5 |
| Open Datasets | Yes | We used in our experiments a classical Grid-based MAPF benchmark (Stern et al., 2019; Sharon et al., 2015). This publicly available [6] benchmark consists of 24 different grids that were based on maps of real cities, video games, open areas with and without random obstacles, maze-like environments, and room-like spaces. Those grids are based on the Moving AI pathfinding repository [7] (Sturtevant, 2012). [6] https://movingai.com/benchmarks/mapf.html [7] https://movingai.com/benchmarks/grids.html |
| Dataset Splits | No | The paper describes testing algorithms in 'static and dynamic settings' on different environments (empty, random, warehouse, game) from a MAPF benchmark, with 20 scenarios per environment and 200 steps for each algorithm. It specifies target and agent configurations, but it does not mention traditional train/test/validation dataset splits typically found in machine learning contexts. |
| Hardware Specification | Yes | The first set of experiments was performed in software simulation, implemented in Python [5], allowing for rigorous evaluation of CAMS compared to existing algorithms. We used a MacBook Air with an Apple M1 chip and 8GB of RAM. In the second set of experiments, we fully implemented CAMS on a physical multi-robot system that included 3 Hamster robots (Cogniteam, 2020). |
| Software Dependencies | No | The paper mentions implementation 'in Python' and the use of 'a move_base ROS package' but does not specify version numbers for Python, ROS, or any other libraries or dependencies used, which would be necessary for reproducibility. |
| Experiment Setup | Yes | We used in our experiments a classical Grid-based MAPF benchmark (Stern et al., 2019; Sharon et al., 2015). This publicly available [6] benchmark consists of 24 different grids that were based on maps of real cities, video games, open areas with and without random obstacles, maze-like environments, and room-like spaces. Those grids are based on the Moving AI pathfinding repository [7] (Sturtevant, 2012). Our software simulation included four environments from the aforementioned benchmark: empty (48×48), random (32×32), warehouse (63×161), and game (180×251). A demonstration of the environments is presented in Figure 9. The number of agents included in each of the scenarios was 20 and the number of targets was 10. For each environment, we tested the algorithms in both static and dynamic settings. In the static settings, the number of targets and their locations were constant, while in the dynamic setting targets randomly switched to new positions every t steps (we used t = 40). The ER values of all targets were 100, the sensing range of all sensors, sr, was set to 5, and the credibility of each sensor was 22. The positive utilities of location function-nodes were selected randomly from the range [10⁻¹⁰, 10⁻⁵]. In each step of the algorithm, each mobile sensor could either move to one of the adjacent locations or stay in its current location. For each environment, the experiments included 20 scenarios, and we report the average remaining coverage requirement and the accumulated number of collisions obtained by each algorithm solving these scenarios. Each algorithm performed 200 steps, where in each step the mobile sensors selected locations. We compared CAMS with six other algorithms: (1) Max-sum_MST (including FMR and OVP), in which the mobile sensor movements were not affected by collisions; (2) Max-sum_MST with breakdowns, in which colliding agents exhibited a breakdown and stopped moving, but kept on sensing and communicating with other sensors; (3) DSA_MST, the standard DSA version described in (Yedidsion et al., 2018); (4) DSA_MST with breakdowns. Like in the Max-sum_MST version, the result of a collision in this version was that the agents stopped moving but kept on sensing; (5) CADSA, a collision-avoiding version of DSA_MST. We ranked the mobile sensors according to their indexes. Each mobile sensor updated its neighbors before moving to a new location. A mobile sensor did not move to a new location if a different mobile sensor with a higher rank reported that it planned to move to the same location. (6) DSSA, the distributed stochastic search algorithm, designed to avoid collisions between ships (Hirayama et al., 2019). DSSA allows mobile sensors to keep suggesting locations they intend to move to while checking for collisions until they converge to a collision-avoiding decision in each step. (7) Random walk, which serves as a baseline for the number of expected collisions. CAMS and Max-sum_MST performed 10 iterations of Max-sum in each step before the mobile sensors selected their locations. The remaining coverage requirement in each step was calculated as Σᵢ rem(tᵢ). We performed t-tests with p < 0.01 between the remaining coverage results at the 200th step for each algorithm solving each scenario, to evaluate statistical significance when comparing the average results produced by the different algorithms, and report standard deviations as well (presented in Tables 2 and 3 in the appendix). In the next set of experiments our goal was to examine the practicality of CAMS in a physical multi-robot system, and specifically to evaluate the delay caused by collisions between robots in such a realistic setting. Thus, we fully implemented CAMS and Max-sum_MST on a mobile sensing team composed of three Hamster robots (Cogniteam, 2020) and two targets, placed in a 4×4 grid, where the size of each vertex of the grid was one square meter. The targets' ER was set to 60, and they were placed randomly in non-adjacent vertices of the grid. The sensors' credibility and the targets' requirements were identical to the software simulation experiments. The number of steps performed was 10. In each step, the robots constructed a factor graph in which each robot performed the role of its own variable-node. The location and target function-nodes were performed by the nearest robot (ties were broken by index). Thus, the implementation was completely distributed. The robots performed 30 iterations of Max-sum in each step, before selecting locations, moving, and beginning the next step. When a collision occurred, the robots had to sort out the conflict and continue to the selected location. This resulted in a delay, and thus, we report the time for arriving at the coverage solution. We selected four different positions for the targets, and for each of them five different positions for the robots, resulting in 20 experiments for each algorithm. The static obstacles were avoided by using the move_base ROS package. |
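The rank-based collision-avoidance rule quoted for CADSA above (sensors are ranked by index; a sensor does not move to a location that a higher-ranked sensor has reported it plans to move to) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `resolve_moves`, the dictionary layout, and the convention that a lower index means a higher rank are all assumptions made here for the example.

```python
def resolve_moves(intended):
    """Resolve one step of intended moves with rank-based yielding.

    `intended` maps a sensor index to a (current_location, target_location)
    pair. Sensors are processed in rank order (assumed: lower index =
    higher rank). A sensor keeps its intended target only if no
    higher-ranked sensor has already claimed that location; otherwise it
    stays where it is. Returns a map from sensor index to final location.
    """
    final = {}
    claimed = set()  # locations already taken by higher-ranked sensors
    for idx in sorted(intended):
        current, target = intended[idx]
        if target in claimed:
            final[idx] = current  # yield: stay in place this step
            claimed.add(current)
        else:
            final[idx] = target
            claimed.add(target)
    return final
```

For example, if sensors 1 and 2 both intend to move to the same cell, sensor 1 (higher rank) moves and sensor 2 stays put. This simplified sketch resolves only conflicts over claimed cells within a single step; the full algorithms in the paper also involve the sensors exchanging these intentions with their neighbors before committing.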