Neural Regret-Matching for Distributed Constraint Optimization Problems

Authors: Yanchen Deng, Runsheng Yu, Xinrun Wang, Bo An

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive empirical evaluations indicate that our algorithm can scale up to large-scale DCOPs and significantly outperform the state-of-the-art methods.
Researcher Affiliation | Academia | School of Computer Science and Engineering, Nanyang Technological University, Singapore
Pseudocode | Yes | Technical proofs, pseudo codes and additional results are provided in the appendix, which can be found at https://personal.ntu.edu.sg/boan/papers/IJCAI21_Deep_DCOP_Appendix.pdf.
Open Source Code | No | The appendix link above provides supplementary material in PDF format, not open-source code for the described methodology.
Open Datasets | No | The paper describes how problem instances for 'random DCOPs', 'scale-free network problems', and 'sensor network problems' were generated (e.g., 'randomly establish a constraint', 'use Barabási-Albert model to generate', 'costs are uniformly selected from [0,100]'), but it does not provide concrete access information (links, DOIs, or specific citations to pre-existing public datasets) for the data used in the experiments (an illustrative generation sketch is given after this table).
Dataset Splits | No | The paper does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) for training, validation, or testing.
Hardware Specification | Yes | All experiments are conducted on an i7 octa-core workstation with 32 GB memory.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers (e.g., Python 3.8, PyTorch 1.9, TensorFlow 2.x).
Experiment Setup | Yes | We set δ_t = t^0.45 and consider each estimator as a neural network with two hidden layers. Each hidden layer has 16 neurons and uses ReLU as the activation function. Each time, the neural networks are trained by 2 steps of mini-batch stochastic gradient descent (SGD) with a batch size of 32. We use the Adam optimizer [Kingma and Ba, 2014] with a learning rate of 2 × 10^-3 to update the parameters. Finally, we set the difference budget b = 4, γ = 0.9, and the capacity of the memory to 5000 regret values (an illustrative configuration sketch is given after this table).
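The 'Experiment Setup' row quotes the estimator architecture and training hyper-parameters. The minimal PyTorch sketch below shows one way those reported values could fit together; the input/output dimensions, the regression loss, and the sampling scheme over the regret memory are assumptions not stated in the quoted text, and this is not the authors' released implementation.

```python
# Hypothetical wiring of the hyper-parameters quoted in the "Experiment Setup" row.
# INPUT_DIM, OUTPUT_DIM and the MSE loss are assumptions; hidden widths, optimizer,
# learning rate, batch size, SGD steps and memory capacity follow the reported values.
import random
from collections import deque

import torch
import torch.nn as nn

INPUT_DIM, OUTPUT_DIM = 8, 1      # assumed; not specified in the quoted text
MEMORY_CAPACITY = 5000            # "capacity of the memory to 5000 regret values"
BATCH_SIZE = 32
SGD_STEPS = 2
LEARNING_RATE = 2e-3

# Each estimator: two hidden layers of 16 neurons with ReLU activations.
estimator = nn.Sequential(
    nn.Linear(INPUT_DIM, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, OUTPUT_DIM),
)
optimizer = torch.optim.Adam(estimator.parameters(), lr=LEARNING_RATE)
loss_fn = nn.MSELoss()                    # assumed regression loss on stored regrets

memory = deque(maxlen=MEMORY_CAPACITY)    # holds (feature tensor, regret tensor) pairs


def train_step():
    """Run 2 steps of mini-batch SGD over the regret memory, as described."""
    if len(memory) < BATCH_SIZE:
        return
    for _ in range(SGD_STEPS):
        batch = random.sample(memory, BATCH_SIZE)
        x = torch.stack([features for features, _ in batch])
        y = torch.stack([regret for _, regret in batch])
        optimizer.zero_grad()
        loss = loss_fn(estimator(x), y)
        loss.backward()
        optimizer.step()
```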
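The 'Open Datasets' row notes that the benchmark instances were generated rather than downloaded (random DCOPs, Barabási-Albert scale-free graphs, costs drawn uniformly from [0,100]). The sketch below illustrates one plausible generator for the scale-free case; the agent count, attachment parameter m, and domain size are illustrative assumptions, and this is not the authors' generator.

```python
# Hypothetical scale-free DCOP instance generator: Barabasi-Albert constraint graph
# with pairwise cost tables drawn uniformly from [0, 100].
import networkx as nx
import numpy as np


def generate_scale_free_dcop(num_agents=100, m=3, domain_size=10, seed=0):
    """Return (constraint_graph, cost_tables) for one random scale-free instance."""
    rng = np.random.default_rng(seed)
    graph = nx.barabasi_albert_graph(num_agents, m, seed=seed)
    # One cost table per constrained pair of agents, integer costs uniform in [0, 100].
    cost_tables = {
        (i, j): rng.integers(0, 101, size=(domain_size, domain_size))
        for i, j in graph.edges()
    }
    return graph, cost_tables


graph, costs = generate_scale_free_dcop()
print(graph.number_of_edges(), "constraints generated")
```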