Exploiting the Structure of Distributed Constraint Optimization Problems

Authors: Ferdinando Fioretto

AAAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | These results validate our hypothesis that one can exploit the information encoded in the DCOP model through the use of centralized solutions. We show that this form of consistency enforces more effective pruning than domain-consistency-based approaches, leading to enhanced efficiency and scalability. The use of centralized solutions within each agent allows us to speed up several DCOP algorithms by up to several orders of magnitude, while the knowledge acquired from the DCOP model allows us to reduce the algorithms' communication requirements compared to existing pre-processing techniques that ignore the structural information dictated by the model. For instance, in (Fioretto et al. 2014a), we propose solving independent local problems in parallel, harnessing the multitude of computational units offered by GPGPUs, which led to significant improvements in the algorithm's runtime.
Researcher Affiliation | Academia | Ferdinando Fioretto, Dept. of Computer Science, New Mexico State University, NM 88001, USA; Dept. of Mathematics & Computer Science, University of Udine, UD 33100, IT; ffiorett@cs.nmsu.edu
Pseudocode | No | The paper does not contain any explicitly labeled pseudocode blocks or algorithm listings.
Open Source Code | No | The paper does not include any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | No | The paper mentions applying its techniques to 'real-world problems' and 'smart grid domains' but does not identify any publicly available dataset or provide access information (links, DOIs, or formal citations with authors and year) for any data used in experiments.
Dataset Splits | No | The paper discusses improvements in efficiency and scalability but does not describe how data was split into training, validation, and test sets, nor does it refer to predefined splits from known benchmarks.
Hardware Specification | No | The paper mentions 'battery-powered devices' and the 'multitude of computational units offered by GPGPUs' in general terms, but does not specify the exact GPU, CPU, or other hardware models used to run the experiments.
Software Dependencies | No | The paper mentions the 'MiniZinc language (Nethercote et al. 2007)', alludes to a proposed 'DCOP language', and refers to the agents' centralized solvers, but does not list specific software dependencies with version numbers (e.g., library or solver versions) needed to replicate its experiments.
Experiment Setup | No | The paper discusses various techniques and models but does not provide specific details of the experimental setup, such as hyperparameter values (e.g., learning rates, batch sizes), optimizer settings, or training configurations.
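The parallelization idea quoted under Research Type, solving each agent's independent local subproblem concurrently, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the names `solve_local` and `solve_all` are hypothetical, a brute-force enumeration stands in for the actual local solver, and a thread pool stands in for the GPGPU parallelism described in (Fioretto et al. 2014a).

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def solve_local(subproblem):
    """Solve one agent's local subproblem by brute force: minimize the
    sum of its cost functions over all assignments to its local variables.

    `subproblem` is a pair (domains, cost_fns), where `domains` is a list
    of value lists (one per local variable) and each cost function maps a
    full local assignment tuple to a numeric cost.
    """
    domains, cost_fns = subproblem
    best_cost, best_assignment = float("inf"), None
    for assignment in product(*domains):
        cost = sum(f(assignment) for f in cost_fns)
        if cost < best_cost:
            best_cost, best_assignment = cost, assignment
    return best_assignment, best_cost

def solve_all(subproblems):
    # The local subproblems are independent, so they are embarrassingly
    # parallel; a real implementation would dispatch them to GPGPU cores.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(solve_local, subproblems))
```

Because no subproblem reads another's variables, the speedup comes purely from the independence of the local searches; no inter-agent communication is needed during this phase.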