Curriculum learning for multilevel budgeted combinatorial problems

Authors: Adel Nabli, Margarida Carvalho

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We report results close to optimality on graphs with up to 100 nodes and a 185× speedup on average compared to the quickest exact solver known for the Multilevel Critical Node problem, a max-min-max trilevel problem that has been shown to be at least Σ₂ᵖ-hard. ... Section 6, Computational Results (a schematic trilevel formulation is sketched after the table)
Researcher Affiliation | Academia | Adel Nabli, Margarida Carvalho; CIRRELT and Département d'Informatique et de Recherche Opérationnelle, Université de Montréal; adel.nabli@umontreal.ca, carvalho@iro.umontreal.ca
Pseudocode | Yes | See Appendix B.1 for the pseudo-code. ... Algorithm 1: MultiL-Cur ... The pseudo-code for the Greedy Rollout function is available in Appendix B.2 ... Its pseudo-code is available in Appendix B.3. (a hedged sketch of the curriculum loop appears after the table)
Open Source Code | Yes | We make our code publicly available: https://github.com/AdelNabli/MCN
Open Datasets | Yes | MCN [1] along with a publicly available dataset of solved instances², which we can use to assess the quality of our heuristic. ... ²https://github.com/mxmmargarida/Critical-Node-Problem ... The first distribution of instances considered is D(1), constituted of Erdős-Rényi graphs [18] with size |V|(1) ∈ [[10, 23]] and arc density d(1) ∈ [0.1, 0.2]. ... The second distribution of instances, D(2), focused on larger graphs with |V|(2) ∈ [[20, 60]], d(2) ∈ [0.05, 0.15]. (an instance-sampling sketch follows the table)
Dataset Splits | No | Algorithm 1 mentions 'Create D^j_train, D^j_val by sampling' and a validation set (L^j_val) but does not provide specific details on the split percentages, absolute counts, or splitting methodology.
Hardware Specification | Yes | To train our agent and at inference, we used one GPU of a cluster of NVIDIA V100 SXM2 with 16 GB of memory.
Software Dependencies | Yes | The architectures presented in Figure 2 were implemented with PyTorch Geometric [21] and PyTorch 1.4 [50]. (a minimal PyTorch Geometric sketch follows the table)
Experiment Setup | No | The paper names Adam [35] as the optimizer but does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed optimizer settings in the main text. It refers to Appendix D for 'Further details of the implementation', but those details are not present in the main body. (a placeholder optimizer setup is sketched below)
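
To make the Research Type row concrete: a budgeted max-min-max trilevel problem such as MCN has the generic shape below. The budget symbols Ω, Φ, and Λ follow notation commonly used in the MCN literature; the precise objective f and constraint sets are defined in the paper itself, so this is only a schematic form.

    \max_{z \in \{0,1\}^{|V|},\ \mathbf{1}^{\top} z \le \Omega}\
    \min_{y \in \{0,1\}^{|V|},\ \mathbf{1}^{\top} y \le \Phi}\
    \max_{x \in \{0,1\}^{|V|},\ \mathbf{1}^{\top} x \le \Lambda}\
    f(G; z, y, x)

Each player selects a subset of nodes of the graph G subject to its own budget, and the middle (min) player acts adversarially against the outer and inner (max) players, which is what makes even evaluating a fixed outer decision hard.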
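
The Pseudocode row points to Algorithm 1 (MultiL-Cur) in Appendix B.1 of the paper. The loop below is only a hedged sketch of the curriculum idea suggested by that row: train one level (total budget) at a time, sampling fresh D^j_train and D^j_val sets per level. Every helper here is a hypothetical placeholder, not the paper's procedure.

    import random

    def sample_instances(budget, n):
        """Placeholder: sample n instances whose total budget is `budget`.
        The paper samples graphs and budget splits; this stub returns dummies."""
        return [{"graph": None, "budget": budget} for _ in range(n)]

    def label_with_previous_agent(instances, previous_agent):
        """Placeholder: in a budget curriculum, targets at level j are computed
        with the agent trained at level j-1 (with none available, a base case
        such as exactly solved trivial instances would be used instead)."""
        return [random.random() for _ in instances]

    def train_agent(instances, targets):
        """Placeholder for a full training run at one curriculum level."""
        return {"trained_on": len(instances)}

    def validate(agent, instances):
        """Placeholder for computing the validation loss L^j_val."""
        return 0.0

    def multilevel_curriculum(max_budget, n_train=100, n_val=20):
        agents = {}
        for j in range(1, max_budget + 1):
            d_train = sample_instances(j, n_train)  # "Create D^j_train ... by sampling"
            d_val = sample_instances(j, n_val)      # "... D^j_val by sampling"
            targets = label_with_previous_agent(d_train, agents.get(j - 1))
            agents[j] = train_agent(d_train, targets)
            _ = validate(agents[j], d_val)          # tracked as L^j_val
        return agents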
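
The Open Datasets row quotes the two training distributions precisely enough to regenerate comparable instances. A minimal sampler using networkx (our library choice, not necessarily the paper's) that draws |V| and the density uniformly from the quoted ranges, treating the arc density as the G(n, p) edge probability:

    import random
    import networkx as nx

    def sample_instance(v_range, d_range, seed=None):
        """Draw one Erdős-Rényi graph with |V| and density uniform in the
        quoted ranges; density is treated as the G(n, p) edge probability."""
        rng = random.Random(seed)
        n = rng.randint(*v_range)   # |V| in the integer interval [[a, b]]
        p = rng.uniform(*d_range)   # arc density d
        return nx.gnp_random_graph(n, p, seed=rng.randint(0, 2**31 - 1))

    # D(1): |V|(1) in [[10, 23]], d(1) in [0.1, 0.2]
    g1 = sample_instance((10, 23), (0.10, 0.20), seed=0)
    # D(2): |V|(2) in [[20, 60]], d(2) in [0.05, 0.15]
    g2 = sample_instance((20, 60), (0.05, 0.15), seed=1)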
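
The Software Dependencies row pins the stack to PyTorch 1.4 and PyTorch Geometric. The paper's actual architecture is given in its Figure 2; the snippet below is only a minimal PyTorch Geometric value network illustrating that stack, with hypothetical layer choices and dimensions.

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import GCNConv, global_mean_pool

    class ToyValueNet(torch.nn.Module):
        """Illustrative graph value network; NOT the architecture of Figure 2."""

        def __init__(self, in_dim, hidden_dim):
            super().__init__()
            self.conv1 = GCNConv(in_dim, hidden_dim)
            self.conv2 = GCNConv(hidden_dim, hidden_dim)
            self.out = torch.nn.Linear(hidden_dim, 1)

        def forward(self, x, edge_index, batch):
            h = F.relu(self.conv1(x, edge_index))
            h = F.relu(self.conv2(h, edge_index))
            return self.out(global_mean_pool(h, batch))  # one scalar value per graph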
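
Finally, the Experiment Setup row notes that Adam is named but its hyperparameters are deferred to Appendix D. Any reimplementation must therefore pick values itself; the ones below (learning rate included) are placeholders wired to the ToyValueNet sketch above, not the paper's settings.

    import torch
    import torch.nn.functional as F

    model = ToyValueNet(in_dim=4, hidden_dim=64)  # placeholder dimensions
    # lr is a guessed default, NOT taken from the paper (see its Appendix D).
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    def training_step(batch, targets):
        """One supervised step on a torch_geometric Batch of graphs."""
        optimizer.zero_grad()
        pred = model(batch.x, batch.edge_index, batch.batch).squeeze(-1)
        loss = F.mse_loss(pred, targets)
        loss.backward()
        optimizer.step()
        return loss.item()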