Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Privacy Preserving Solution of DCOPs by Local Search
Authors: Shmuel Goldklang, Tal Grinshpoun, Tamir Tassa
IJCAI 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We implemented P-DSA and compared its performance to PMax-Sum [Tassa et al., 2017], which is a privacy-preserving implementation of an incomplete DCOP algorithm (Max-Sum [Farinelli et al., 2008]). Experiments were conducted on a machine equipped with an Intel i5-10400 CPU @ 2.90GHz, 2904 MHz, 6 Core(s), 12 Logical Processor(s), 16GB DDR4 RAM. The system ran Microsoft Windows 10 Pro, and the code was written in Java 1.8.0 using the Sinalgo simulation framework. |
| Researcher Affiliation | Academia | ¹Department of Mathematics and Computer Science, The Open University of Israel; ²Department of Industrial Engineering and Management, Ariel University. EMAIL, EMAIL, EMAIL |
| Pseudocode | Yes | Algorithm 1: The DSA algorithm ... Protocol 2: P-DSA (Private DSA) ... Sub-protocol 3: Find Best Assignment |
| Open Source Code | Yes | The source code is available on https://github.com/dcop2025/dcop-sim/tree/main. |
| Open Datasets | No | In one set of experiments we used random constraint graphs, where each graph is a random graph of n nodes in which each pair of nodes is connected by an edge with probability d. In another set of experiments we generated scale-free random graphs [Barabási and Albert, 1999] with an initial clique of size 5, and 4 backward edges for each additional node. In all experiments, each constraint matrix was a random m × m matrix with entries that distribute uniformly on the interval [0, 10]. The paper describes generating its own random data based on parameters and references a paper for scale-free graphs, but does not provide a specific link or repository to an existing open dataset. |
| Dataset Splits | No | We selected a new random problem (where a problem consists of the constraint graph as well as the constraint matrices), ran both algorithms on the same problem, and evaluated the cost of their output after T = 1, 2, 3 minutes of execution. We repeated that experiment 20 times, and we report the average of the costs obtained by each of the two algorithms within each of the prescribed time frames. The paper describes generating random problem instances and repeating experiments, but does not mention standard training/test/validation splits for a fixed dataset. |
| Hardware Specification | Yes | Experiments were conducted on a machine equipped with an Intel i5-10400 CPU @ 2.90GHz, 2904 MHz, 6 Core(s), 12 Logical Processor(s), 16GB DDR4 RAM. |
| Software Dependencies | Yes | The system ran Microsoft Windows 10 Pro, and the code was written in Java 1.8.0 using the Sinalgo simulation framework. |
| Experiment Setup | Yes | Number of agents n ∈ {10, 20, 30, 40, ..., 100}. Domain size m ∈ {5, 10, 15, 20, 25}. Constraint density d ∈ {0.2, 0.4, 0.6, 0.8, 1.0}, the fraction of constrained pairs of variables out of all n(n−1)/2 pairs ... We evaluated the cost of their output after T = 1, 2, 3 minutes of execution. |
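The benchmark generation quoted above (random constraint graphs with edge probability d and random m × m cost matrices uniform on [0, 10]) can be sketched in Java, the language the paper's code is written in. This is a minimal illustrative sketch, not the authors' implementation: the class name `RandomDcopProblem`, its fields, and the `totalCost` helper are all assumptions introduced here for clarity.

```java
import java.util.Random;

// Hypothetical sketch of the random-problem generation described in the paper:
// an Erdos-Renyi constraint graph over n agents (each pair connected with
// probability d) and, for each constrained pair, a random m x m cost matrix
// with entries uniform on [0, 10]. All names here are illustrative.
public class RandomDcopProblem {
    final int n;                 // number of agents
    final int m;                 // domain size
    final boolean[][] edge;      // constraint-graph adjacency
    final double[][][][] cost;   // cost[i][j] is the m x m constraint matrix

    RandomDcopProblem(int n, int m, double d, long seed) {
        this.n = n;
        this.m = m;
        this.edge = new boolean[n][n];
        this.cost = new double[n][n][][];
        Random rng = new Random(seed);
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                if (rng.nextDouble() < d) {            // connect with probability d
                    edge[i][j] = edge[j][i] = true;
                    double[][] c = new double[m][m];
                    for (int a = 0; a < m; a++)
                        for (int b = 0; b < m; b++)
                            c[a][b] = 10.0 * rng.nextDouble(); // uniform on [0, 10]
                    cost[i][j] = c;
                    cost[j][i] = transpose(c);         // symmetric view for agent j
                }
            }
        }
    }

    static double[][] transpose(double[][] c) {
        double[][] t = new double[c[0].length][c.length];
        for (int a = 0; a < c.length; a++)
            for (int b = 0; b < c[0].length; b++)
                t[b][a] = c[a][b];
        return t;
    }

    // Total cost of a full assignment (one domain value index per agent),
    // the quantity the experiments report after T minutes of execution.
    double totalCost(int[] assignment) {
        double sum = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (edge[i][j])
                    sum += cost[i][j][assignment[i]][assignment[j]];
        return sum;
    }

    public static void main(String[] args) {
        // One problem instance at the smallest reported scale: n = 10, m = 5, d = 0.6.
        RandomDcopProblem p = new RandomDcopProblem(10, 5, 0.6, 42L);
        int[] zeros = new int[p.n]; // every agent picks its first domain value
        System.out.println("cost of all-zeros assignment: " + p.totalCost(zeros));
    }
}
```

The experiments quoted under "Dataset Splits" would draw 20 such instances per parameter setting (fresh seeds), run both algorithms on each, and average the resulting costs.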