AsymDPOP: Complete Inference for Asymmetric Distributed Constraint Optimization Problems
Authors: Yanchen Deng, Ziyu Chen, Dingding Chen, Wenxin Zhang, Xingqiong Jiang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The empirical evaluation indicates that AsymDPOP significantly outperforms the state-of-the-art, as well as the vanilla DPOP with PEAV formulation. |
| Researcher Affiliation | Academia | Yanchen Deng, Ziyu Chen, Dingding Chen, Wenxin Zhang, and Xingqiong Jiang, College of Computer Science, Chongqing University. dyc941126@126.com, {chenziyu,dingding}@cqu.edu.cn, wenxinzhang18@163.com, jxq@cqu.edu.cn |
| Pseudocode | Yes | Algorithm 1: GNLE for a_i; Algorithm 2: Value Propagation for a_i (for context, a minimal DPOP-style sketch follows this table). |
| Open Source Code | No | The paper does not provide a statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper states 'For each experiment, we generate 50 random instances and report the medians as the results', but it provides no information on how to access these instances or replicate their generation process, nor does it refer to a publicly available dataset. |
| Dataset Splits | No | The paper states 'For each experiment, we generate 50 random instances' but does not specify any training, validation, or test splits for these instances or any dataset. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as programming languages or library versions, used for the experiments. |
| Experiment Setup | No | The paper discusses parameters of its proposed algorithm (k_p, k_e) and problem characteristics (density, agent number) but does not provide details typical of an experimental setup, such as learning rates, optimizer settings, or other training hyperparameters. |
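
The Pseudocode row names the paper's two phases: GNLE (Algorithm 1, a generalized non-local elimination utility propagation) and Value Propagation (Algorithm 2), both built on DPOP-style inference over a pseudo-tree. For orientation only, below is a minimal sketch of classic DPOP on a toy tree-structured problem with symmetric binary costs. It is not the paper's GNLE: AsymDPOP additionally handles asymmetric constraints (each agent holds a private side of every cost function) and pseudo-trees with back-edges. All identifiers here (`utility_pass`, `value_pass`, the example tree and cost tables) are hypothetical.

```python
# Minimal sketch of DPOP-style inference on a TREE-structured DCOP with
# symmetric binary costs. Illustrative only: AsymDPOP's GNLE generalizes
# this to asymmetric costs and non-local elimination, not modeled here.

DOMAIN = [0, 1, 2]  # every variable shares this toy domain

# Tree as parent pointers; cost(child_value, parent_value) on each edge.
parent = {"x2": "x1", "x3": "x1", "x4": "x2"}
children = {"x1": ["x2", "x3"], "x2": ["x4"], "x3": [], "x4": []}
cost = {  # hypothetical symmetric edge costs
    ("x2", "x1"): lambda c, p: abs(c - p),
    ("x3", "x1"): lambda c, p: (c + p) % 3,
    ("x4", "x2"): lambda c, p: 2 if c == p else 0,
}

best = {}  # best[var][parent_value] -> optimal value for var, cached
           # during the bottom-up pass for use in the top-down pass

def utility_pass(var):
    """Bottom-up (UTIL phase): return a table mapping each parent value
    to the minimal cost of var's subtree, eliminating var locally."""
    child_tables = [utility_pass(ch) for ch in children[var]]
    table, best[var] = {}, {}
    for p_val in DOMAIN:
        costs = {
            v: cost[(var, parent[var])](v, p_val)
               + sum(t[v] for t in child_tables)
            for v in DOMAIN
        }
        best_v = min(costs, key=costs.get)
        table[p_val], best[var][p_val] = costs[best_v], best_v
    return table

def value_pass(var, assignment):
    """Top-down (VALUE phase, Algorithm 2's role): each child reads off
    its cached best response to the parent's fixed value, then recurses."""
    for ch in children[var]:
        assignment[ch] = best[ch][assignment[var]]
        value_pass(ch, assignment)

def solve(root):
    tables = [utility_pass(ch) for ch in children[root]]
    root_costs = {v: sum(t[v] for t in tables) for v in DOMAIN}
    assignment = {root: min(root_costs, key=root_costs.get)}
    value_pass(root, assignment)
    return assignment, root_costs[assignment[root]]

if __name__ == "__main__":
    print(solve("x1"))  # -> ({'x1': 0, 'x2': 0, 'x4': 1, 'x3': 0}, 0)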