Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline that has been validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

On the Power and Limitations of Deception in Multi-Robot Adversarial Patrolling

Authors: Noga Talmor, Noa Agmon

IJCAI 2017 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We have fully implemented the deception mechanisms, and following an empirical evaluation, report the tradeoff between deception and probability of penetration detection along the perimeter in several cases." |
| Researcher Affiliation | Academia | Noga Talmor and Noa Agmon, Department of Computer Science, Bar-Ilan University, Israel (EMAIL, EMAIL) |
| Pseudocode | Yes | Algorithm 1: Seemingly Random Patrol |
| Open Source Code | No | The paper states "We have fully implemented the deception mechanisms" but does not provide a link or an explicit statement that the code is open-source or publicly available. |
| Open Datasets | No | The paper describes a theoretical perimeter setup (dividing the perimeter P into N identical time segments) and does not refer to a publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper does not mention dataset split information (percentages, counts, or standard splits), as it does not use a pre-existing dataset. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used to run its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, libraries, or solvers). |
| Experiment Setup | No | The paper describes the algorithmic logic and models but does not provide specific experimental setup details such as hyperparameter values, optimizer settings, or training schedules. |