Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
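The validation step mentioned above, comparing LLM-assigned labels to manual annotations, can be sketched as a per-variable agreement computation. This is an illustrative sketch only: the function name, label values, and toy data below are assumptions, not the actual pipeline described in [1].

```python
# Hypothetical sketch of validating LLM-based classification against a
# manually labeled dataset: per-variable agreement between the two label sets.
# Variable names and label values are illustrative, not the real pipeline's.

def accuracy_per_variable(manual, predicted):
    """Return the fraction of papers where the LLM label matches the
    manual label, computed separately for each reproducibility variable."""
    scores = {}
    for variable in manual:
        pairs = list(zip(manual[variable], predicted[variable]))
        matches = sum(1 for m, p in pairs if m == p)
        scores[variable] = matches / len(pairs)
    return scores

# Toy example: three papers, two reproducibility variables.
manual = {
    "Open Source Code": ["Yes", "No", "No"],
    "Open Datasets":    ["Yes", "Yes", "No"],
}
predicted = {
    "Open Source Code": ["Yes", "No", "Yes"],  # one disagreement
    "Open Datasets":    ["Yes", "Yes", "No"],
}
scores = accuracy_per_variable(manual, predicted)
```

A real validation would also report per-class metrics (e.g. precision and recall for the "No" label), since reproducibility variables are often heavily imbalanced.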

Combinatorial Optimization and Reasoning with Graph Neural Networks

Authors: Quentin Cappart, Didier Chételat, Elias B. Khalil, Andrea Lodi, Christopher Morris, Petar Veličković

JMLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper presents a conceptual review of recent key advancements in this emerging field, aiming at optimization and machine learning researchers. [...] We give an overview of recent advances in using GNNs in the context of CO, aiming at both CO and machine learning researchers. We discuss challenges arising from the use of GNNs and future work.
Researcher Affiliation | Collaboration | Quentin Cappart, Department of Computer Engineering and Software Engineering, Polytechnique Montréal, Montréal, Canada; Didier Chételat, CERC in Data Science for Real-Time Decision-Making, Polytechnique Montréal, Montréal, Canada; Elias B. Khalil, Department of Mechanical & Industrial Engineering, University of Toronto, Toronto, Canada; Andrea Lodi, Jacobs Technion-Cornell Institute, Cornell Tech and Technion IIT, New York, USA; Christopher Morris, Department of Computer Science, RWTH Aachen University, Aachen, Germany; Petar Veličković, DeepMind, London, UK
Pseudocode | No | The paper describes algorithms and concepts using natural language, mathematical formulas (e.g., Equations 1-4), and figures (e.g., Figure 2 for GNN aggregation, Figure 4 for algorithmic alignment), but does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor structured code-like procedures.
Open Source Code | No | The paper is a survey and review of existing methods and does not present a new methodology requiring source code release. Section 5, 'Implementation Frameworks', discusses various open-source libraries and frameworks (e.g., PyTorch Geometric, Deep Graph Library, OR-Gym, OpenGraphGym, MIPLearn, Ecole, SeaPearl, CLRS-30) that are relevant to the field, but these are third-party tools, not code released by the authors for this specific paper's content.
Open Datasets | No | As a survey paper, this work does not conduct its own experiments and therefore does not use specific datasets for its own methodology. While it mentions various datasets (e.g., MIPLIB, CIFAR-10, MNIST, ImageNet) in the context of discussing other researchers' work, it does not provide access information for a dataset used in its own experiments.
Dataset Splits | No | This paper is a survey and review, not an experimental paper. It does not present new empirical work or experiments that would require specific training/test/validation dataset splits, so no such information is provided.
Hardware Specification | No | This paper is a survey and review and does not describe new experimental work conducted by the authors. Consequently, there are no mentions of specific hardware specifications (e.g., GPU models, CPU types, cloud resources) used for running experiments.
Software Dependencies | No | This paper is a survey and review of existing methods and does not implement or run new software for its own methodology. Section 5, 'Implementation Frameworks', lists several software libraries and frameworks relevant to the field (e.g., PyTorch Geometric, Deep Graph Library, CPLEX, Gecode, Choco, MIPLearn, Ecole, SeaPearl, CLRS-30), but these are third-party tools used by other researchers, not specific dependencies for the work presented in this paper.
Experiment Setup | No | This paper is a survey and review of existing methods, not an experimental paper presenting new results. Therefore, it does not include specific experimental setup details such as hyperparameter values, model initialization, or training schedules for its own work.