Learning to Perform Local Rewriting for Combinatorial Optimization

Authors: Xinyun Chen, Yuandong Tian

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We present the evaluation results in this section. To calculate the inference time, we run all algorithms on the same server equipped with 2 Quadro GP100 GPUs and 80 CPU cores. Only 1 GPU is used when evaluating neural networks, and 4 CPU cores are used for search algorithms. We set the timeout of search algorithms to be 10 seconds per instance. All neural networks in our evaluation are implemented in PyTorch [38].
Researcher Affiliation | Collaboration | Xinyun Chen, UC Berkeley, xinyun.chen@berkeley.edu; Yuandong Tian, Facebook AI Research, yuandong@fb.com
Pseudocode | No | The paper describes its methodology in text and diagrams (Figure 1) but does not provide structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code is available at https://github.com/facebookresearch/neural-rewriter.
Open Datasets | No | For expression simplification, the dataset was constructed by generating random pipelines with the generator in Halide. For job scheduling, 100K job sequences were randomly generated. For vehicle routing, problem instances were randomly generated. No link, DOI, or citation indicating that these generated datasets are publicly available is provided.
Dataset Splits | Yes | For expression simplification, the data was split 8/1/1 into training/validation/test sets. For job scheduling, 80K/10K/10K instances were used for training, validation, and testing.
Hardware Specification | Yes | We run all algorithms on the same server equipped with 2 Quadro GP100 GPUs and 80 CPU cores.
Software Dependencies | No | All neural networks in our evaluation are implemented in PyTorch [38]. No specific PyTorch version number is provided.
Experiment Setup | Yes | To calculate the inference time, we run all algorithms on the same server equipped with 2 Quadro GP100 GPUs and 80 CPU cores. Only 1 GPU is used when evaluating neural networks, and 4 CPU cores are used for search algorithms. We set the timeout of search algorithms to be 10 seconds per instance. All neural networks in our evaluation are implemented in PyTorch [38]. For dataset splits, 8/1/1 train/validation/test was used for expression simplification, and 80K/10K/10K for job scheduling.
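The 8/1/1 and 80K/10K/10K splits reported above can be sketched as a simple shuffled partition. This is an illustrative sketch only; the paper does not publish its splitting code, so the function name, seed, and shuffling strategy here are assumptions.

```python
import random

def split_dataset(samples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle samples and partition them into train/validation/test sets.

    `ratios` must sum to 1; any rounding remainder goes to the test set.
    """
    assert abs(sum(ratios) - 1.0) < 1e-9, "ratios must sum to 1"
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # fixed seed for a reproducible split
    n = len(samples)
    n_train = int(n * ratios[0])
    n_valid = int(n * ratios[1])
    train = samples[:n_train]
    valid = samples[n_train:n_train + n_valid]
    test = samples[n_train + n_valid:]
    return train, valid, test

# 100K instances -> 80K / 10K / 10K, matching the job-scheduling split.
train, valid, test = split_dataset(range(100_000))
print(len(train), len(valid), len(test))  # 80000 10000 10000
```

The same call with a 10-sample toy dataset yields the 8/1/1 expression-simplification split.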