On Representing Linear Programs by Graph Neural Networks

Authors: Ziang Chen, Jialin Liu, Xinshang Wang, Wotao Yin

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To validate our results, we train a simple GNN and present its accuracy in mapping LPs to their feasibilities and solutions."
Researcher Affiliation | Collaboration | Ziang Chen, Department of Mathematics, Duke University, Durham, NC 27708 (ziang@math.duke.edu); Jialin Liu, DAMO Academy, Alibaba US, Bellevue, WA 98004 (jialin.liu@alibaba-inc.com)
Pseudocode | Yes | "Algorithm 1: The WL test for LP-Graphs (denoted by WL_LP)" (a runnable sketch of such a test follows the table)
Open Source Code | Yes | "The codes are modified from Gasse et al. (2019) and can be found in https://github.com/liujl11git/GNN-LP.git."
Open Datasets | No | "We generate each LP with the following way. We set m = 10 and n = 50. Each matrix A is sparse with 100 nonzero elements whose positions are sampled uniformly and values are sampled normally." (a generation sketch follows the table)
Dataset Splits | No | The paper mentions a 'training set' and a 'testing set' but does not specify a 'validation set' or 'validation split' for the experiments.
Hardware Specification | Yes | "All the experiments are conducted on a Linux server with an Intel Xeon Platinum 8163 CPU and eight NVIDIA Tesla V100 GPUs."
Software Dependencies | No | The paper mentions software like 'TensorFlow', 'scipy.optimize.linprog', and 'Adam', but it does not provide specific version numbers for these or other key software components used in the experiments.
Experiment Setup | Yes | "We use Adam (Kingma & Ba, 2014) as our training optimizer with learning rate of 0.0003. ... We set $L = 2$ for all GNNs and those learnable functions $f^V_{\mathrm{in}}, f^W_{\mathrm{in}}, f_{\mathrm{out}}, f^W_{\mathrm{out}}, \{f^V_l, f^W_l, g^V_l, g^W_l\}_{l=0}^{L}$ are all parameterized with MLPs." (a hedged model/optimizer sketch follows the table)
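
The pseudocode row points to Algorithm 1, a WL test adapted to LP-graphs: color refinement on the bipartite graph where constraint node i carries b_i, variable node j carries c_j, and the edge between them carries A_ij. Below is a minimal Python sketch of such a test under those assumptions; the hash and rounding choices are illustrative, not the paper's exact construction.

```python
import numpy as np

def wl_test_lp(A, b, c, num_iters=3):
    """WL-style color refinement on the bipartite LP graph (illustrative sketch).

    Constraint node i is initialized from b[i], variable node j from c[j];
    each refinement hashes a node's color together with the multiset of
    (neighbor color, edge weight) pairs across the nonzeros of A.
    """
    m, n = A.shape
    con = [hash(('con', round(float(bi), 8))) for bi in b]
    var = [hash(('var', round(float(cj), 8))) for cj in c]
    for _ in range(num_iters):
        # Sorted tuples keep multiset semantics when hashing neighborhoods.
        new_con = [hash((con[i], tuple(sorted(
            (var[j], round(float(A[i, j]), 8))
            for j in range(n) if A[i, j] != 0)))) for i in range(m)]
        new_var = [hash((var[j], tuple(sorted(
            (con[i], round(float(A[i, j]), 8))
            for i in range(m) if A[i, j] != 0)))) for j in range(n)]
        con, var = new_con, new_var
    # Two LPs whose final color multisets differ are WL-distinguishable.
    return sorted(con), sorted(var)
```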
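The dataset row quotes the paper's recipe for A (m = 10, n = 50, 100 nonzeros with uniformly sampled positions and normally sampled values). The sketch below follows that recipe for A; the distributions used for b and c are assumptions, since the quoted excerpt specifies only A. The labeling call mirrors the scipy.optimize.linprog solver the paper mentions.

```python
import numpy as np
from scipy.optimize import linprog

def generate_lp(m=10, n=50, nnz=100, seed=None):
    """Sample one random LP instance per the quoted recipe for A;
    the distributions of b and c below are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    A = np.zeros((m, n))
    pos = rng.choice(m * n, size=nnz, replace=False)  # uniform positions
    A.flat[pos] = rng.standard_normal(nnz)            # normal values
    b = rng.standard_normal(m)  # assumption: not specified in the excerpt
    c = rng.standard_normal(n)  # assumption: not specified in the excerpt
    return A, b, c

# Label an instance with the solver the paper mentions
# (default bounds x >= 0; minimize c^T x subject to A x <= b).
A, b, c = generate_lp(seed=0)
res = linprog(c, A_ub=A, b_ub=b, method="highs")
print(res.status)  # 0 = optimal, 2 = infeasible, 3 = unbounded
```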
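The experiment-setup row describes an L = 2 GNN whose learnable maps are all MLPs, trained with Adam at learning rate 0.0003. Below is a hedged TensorFlow sketch of a bipartite message-passing model in that spirit: the hidden width, MLP depth, sum pooling, scalar readout, and the exact wiring of the f/g maps are assumptions, not the authors' architecture (their code, modified from Gasse et al. (2019), is in the repository linked above).

```python
import tensorflow as tf

def mlp(out, width=64):
    # Every learnable map is an MLP, as in the paper; width/depth are assumed.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(width, activation="relu"),
        tf.keras.layers.Dense(out),
    ])

class LPGNN(tf.keras.Model):
    """Simplified L = 2 bipartite message-passing GNN over an LP-graph."""
    def __init__(self, L=2, d=64):
        super().__init__()
        self.f_in_v, self.f_in_w = mlp(d), mlp(d)  # input embeddings
        self.f_v = [mlp(d) for _ in range(L)]      # constraint-side updates
        self.f_w = [mlp(d) for _ in range(L)]      # variable-side updates
        self.g_v = [mlp(d) for _ in range(L)]      # constraint-side messages
        self.g_w = [mlp(d) for _ in range(L)]      # variable-side messages
        self.f_out = mlp(1)                        # graph-level readout

    def call(self, inputs):
        A, h_v, h_w = inputs  # A: (m, n); node features: (m, dv) and (n, dw)
        h_v, h_w = self.f_in_v(h_v), self.f_in_w(h_w)
        for f_v, f_w, g_v, g_w in zip(self.f_v, self.f_w, self.g_v, self.g_w):
            # Messages flow across the weighted bipartite edges of A.
            h_v = f_v(tf.concat([h_v, tf.matmul(A, g_w(h_w))], axis=-1))
            h_w = f_w(tf.concat(
                [h_w, tf.matmul(A, g_v(h_v), transpose_a=True)], axis=-1))
        pooled = tf.concat(
            [tf.reduce_sum(h_v, axis=0), tf.reduce_sum(h_w, axis=0)], axis=-1)
        return self.f_out(pooled[None, :])  # e.g., a feasibility logit

model = LPGNN(L=2)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
```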