Enhancing Network by Reinforcement Learning and Neural Confined Local Search

Authors: Qifu Hu, Ruyang Li, Qi Deng, Yaqian Zhao, Rengang Li

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments on synthetic and real networks to verify the ability of our models. In the experiment, we compare our models with heuristics and NEP-DQN on synthetic and real networks using two attack algorithms. We evaluate the ID and OOD generalization ability on synthetic networks, and the active search [Bello et al., 2016] on real networks.
Researcher Affiliation | Industry | Qifu Hu, Ruyang Li, Qi Deng, Yaqian Zhao and Rengang Li, Inspur Electronic Information Industry Co., Ltd, {huqifu, liruyang, dengqi01, zhaoyaqian}@inspur.com, lirengang.hsslab@gmail.com
Pseudocode | No | The paper describes the proposed models and processes using mathematical equations and descriptive text, but it does not include a clearly labeled pseudocode block or algorithm.
Open Source Code | Yes | All source code is available at this GitHub repository.
Open Datasets | Yes | We use synthetic networks generated from graph models and real-world networks. The first model is the Barabási-Albert (BA) model [Barabási and Albert, 1999]. The second model is the Erdős-Rényi (ER) model [Erdős et al., 1960]. All real-world networks are available at this URL.
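Both synthetic graph models are standard and straightforward to regenerate; a minimal sketch using networkx follows. The node count and model parameters (n, m, p) are illustrative assumptions, not values taken from the paper:

```python
# Sketch: generating BA and ER synthetic networks with networkx.
# n, m, and p are assumed for illustration; the excerpt above does not fix them.
import networkx as nx

n = 100                                               # number of nodes (assumed)
ba_graph = nx.barabasi_albert_graph(n, m=4, seed=0)   # BA: preferential attachment
er_graph = nx.erdos_renyi_graph(n, p=0.08, seed=0)    # ER: independent edge probability
print(ba_graph.number_of_edges(), er_graph.number_of_edges())
```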
Dataset Splits | Yes | We generate a training set, a validation set, an ID test set, and two OOD test sets for each network model. The training set consists of 2^14 networks, while both the validation set and the test set consist of 2^7 networks. We then choose the pretrained model with the best validation performance for each combination and report the best performance achieved by the pre-trained models for each OOD test set.
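Taking the reported set sizes at face value, the split can be sketched as below. Only the counts 2^14 and 2^7 come from the quote above; the graph model and its parameters are assumed:

```python
# Sketch of the reported split sizes; BA parameters (n=100, m=4) are assumed.
import networkx as nx

def make_set(num_graphs, n=100, m=4, seed0=0):
    # Distinct seeds keep the three sets disjoint under this generation scheme.
    return [nx.barabasi_albert_graph(n, m, seed=seed0 + i) for i in range(num_graphs)]

train_set = make_set(2**14, seed0=0)              # 16384 training networks
val_set   = make_set(2**7,  seed0=2**14)          # 128 validation networks
id_test   = make_set(2**7,  seed0=2**14 + 2**7)   # 128 ID test networks
```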
Hardware Specification | Yes | The experiments are conducted on a server (56-core Intel Xeon Gold 6348 CPU 2.60GHz, 1T RAM, 8 NVIDIA A100 GPUs) under the Python 3.9.12 and PyTorch 1.12.1 environments.
Software Dependencies | Yes | The experiments are conducted on a server (56-core Intel Xeon Gold 6348 CPU 2.60GHz, 1T RAM, 8 NVIDIA A100 GPUs) under the Python 3.9.12 and PyTorch 1.12.1 environments.
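When attempting a rerun, the reported stack can be sanity-checked with a short script; this is a sketch, with the expected values taken from the quote above:

```python
# Verify the reported environment: Python 3.9.12, PyTorch 1.12.1, 8 A100 GPUs.
import sys
import torch

print(sys.version.split()[0])      # expect 3.9.12
print(torch.__version__)           # expect 1.12.1
print(torch.cuda.is_available())   # expect True on the reported server
print(torch.cuda.device_count())   # expect 8
```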
Experiment Setup | Yes | For NEP-AM and NEP-HAM, we set the initial embedding dimension d_hi = 32 and the number of transformer layers o = 3. In MHA, the number of heads is set as H = 8, and the key and value dimensions are set to d_k = d_v = d_hi/H. The number of hidden neurons in FF is set as 512. The C in (5) is set to 10. Models are trained using the Adam optimizer; the batch size is fixed at 128, and the learning rate is fixed at 1e-4. The training steps are set as 400k, 300k, and 200k for τ = 1%, 2.5%, 5%, respectively.
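The reported dimensions are internally consistent (d_k = d_v = 32/8 = 4) and map directly onto a standard transformer encoder; a minimal PyTorch sketch is shown below. Only the numbers are taken from the quote; the wiring via nn.TransformerEncoder is an assumption, not the authors' architecture:

```python
# Sketch wiring the reported hyperparameters into a generic transformer encoder.
import torch
import torch.nn as nn

d_hi, n_layers, n_heads, ff_hidden = 32, 3, 8, 512   # d_k = d_v = d_hi / H = 4

layer = nn.TransformerEncoderLayer(
    d_model=d_hi, nhead=n_heads, dim_feedforward=ff_hidden, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

# Adam with the reported learning rate; the paper fixes the batch size at 128.
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)
```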