Solving Distributed Constraint Optimization Problems Using Logic Programming

Authors: Tiep Le, Tran Cao Son, Enrico Pontelli, William Yeoh

AAAI 2015

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | We implement two versions of the ASP-DPOP algorithm: one that uses ground programs, which we call ASP-DPOP (facts), and one that uses non-ground programs, which we call ASP-DPOP (rules). In addition, we also implemented a variant of H-DPOP, called PH-DPOP, which stands for Privacy-based H-DPOP, that restricts the amount of information each agent can access to the amount common in most DCOP algorithms, including ASP-DPOP. Specifically, agents in PH-DPOP can only access their own constraints and, unlike H-DPOP, cannot access their neighboring agents' constraints. In our experiments, we compare both versions of ASP-DPOP against DPOP (Petcu and Faltings 2005), H-DPOP (Kumar, Petcu, and Faltings 2008), and PH-DPOP.

Researcher Affiliation | Academia | Tiep Le, Tran Cao Son, Enrico Pontelli, William Yeoh, Department of Computer Science, New Mexico State University, {tile, tson, epontell, wyeoh}@cs.nmsu.edu

Pseudocode | No | The paper describes the DPOP and ASP-DPOP algorithms in prose, detailing their phases and functionality, but it does not provide structured pseudocode or algorithm blocks.

Open Source Code | No | The paper does not state that the source code for ASP-DPOP (the methodology described in the paper) is publicly available, nor does it provide a link to it. It only mentions using a publicly available implementation of DPOP and an implementation of H-DPOP provided by its authors.

Open Datasets | Yes | We conduct our experiments on random graphs (Erdős and Rényi 1959), where we systematically vary domain-independent parameters, and on comprehensive optimization problems in power networks (Gupta et al. 2013).

Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a splitting methodology) for training, validation, or testing. It describes varying parameters to generate random graphs and using power-network topologies as experimental instances, but not in terms of train/validation/test splits.

Hardware Specification | Yes | All experiments are performed on a Quadcore 3.4GHz machine with 16GB of memory.

Software Dependencies | No | The paper mentions using "CLASP (Gebser et al. 2007) with its companion grounder GRINGO" and "the Linda facilities offered by SICStus Prolog for communication," but it does not provide version numbers for these software components.

Experiment Setup | Yes | For each experiment, we vary only one parameter and fix the rest to their default values: |A| = 5, |X| = 15, |Di| = 6, p1 = 0.6, p2 = 0.6.
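To make the setup parameters concrete, the following is a minimal sketch of how a random binary DCOP benchmark with these defaults might be generated. The paper does not spell out the generator here, so this assumes the common DCOP convention that p1 is constraint density (probability that a pair of variables is constrained) and p2 is constraint tightness; the round-robin variable-to-agent mapping, the penalty value, and the cost range are likewise illustrative assumptions, not the authors' code.

```python
import itertools
import random

def random_dcop(num_agents=5, num_vars=15, domain_size=6,
                p1=0.6, p2=0.6, seed=1):
    """Sketch of an Erdos-Renyi-style random binary DCOP instance.

    Assumptions (not stated in the checklist above):
      - p1: constraint density (edge included with probability p1)
      - p2: tightness (a value pair is heavily penalized with prob. p2)
      - variables are assigned to agents round-robin
    """
    rng = random.Random(seed)
    domain = list(range(domain_size))
    # Hypothetical variable-to-agent mapping.
    owner = {x: x % num_agents for x in range(num_vars)}
    constraints = {}
    for i, j in itertools.combinations(range(num_vars), 2):
        if rng.random() < p1:  # edge present with probability p1
            table = {}
            for vi, vj in itertools.product(domain, repeat=2):
                # With probability p2 the pair gets a large penalty cost,
                # otherwise a small random cost (illustrative values).
                table[(vi, vj)] = 1000 if rng.random() < p2 else rng.randint(0, 10)
            constraints[(i, j)] = table
    return owner, constraints

owner, constraints = random_dcop()
```

Varying exactly one of the keyword arguments while leaving the rest at the defaults mirrors the experimental protocol quoted above.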