USCO-Solver: Solving Undetermined Stochastic Combinatorial Optimization Problems

Authors: Guangmo Tong

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In empirical studies, we demonstrate our design using proof-of-concept experiments, and compare it with other methods that are potentially applicable. Overall, we obtain highly encouraging experimental results for several classic combinatorial problems on both synthetic and real-world datasets."
Researcher Affiliation | Academia | Guangmo Tong, Department of Computer and Information Sciences, University of Delaware (amotong@udel.edu)
Pseudocode | No | The paper describes the steps of USCO-Solver in text (Step 1 through Step 4) but does not provide a formally labeled pseudocode or algorithm block.
Open Source Code | Yes | "In addition, we release the experimental materials, including source code, datasets, and pretrain models, as well as their instructions. The experiment materials can be found online." The cited repository is https://github.com/cdslabamotong/USCO-Solver
Open Datasets | Yes | "Col and NY are USA road networks compiled from the benchmarks of DIMACS Implementation Challenge [27], and Kro is a Kronecker graph [28]. We construct two instances based on two real-world bipartite graphs Cora [38] and Yahoo [39]."
Dataset Splits | No | "For each considered method, we randomly select 160 training samples and 6400 testing samples from the pool." There is no explicit mention of a separate validation split or validation dataset (an illustrative split sketch follows the table).
Hardware Specification | No | The paper states that hardware information can be found in Appendix 4, but that appendix is not part of the provided document; no GPU/CPU models or other hardware details appear in the main text.
Software Dependencies | No | The paper mentions software such as Pystruct [30] and refers to Naive Bayes and DSPN implementations, but does not provide version numbers for any dependency.
Experiment Setup | Yes | "For each considered method, we randomly select 160 training samples and 6400 testing samples from the pool. The number of configurations (i.e., K) is enumerated from small to large, with different ranges for different datasets. We use the zero-one loss, and the training algorithm is implemented based on Pystruct [30]." (A hedged Pystruct sketch is given at the end of this section.)
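
To make the reported split protocol concrete, here is a minimal, illustrative sketch of drawing 160 training and 6400 testing samples at random from a sample pool, with no validation split. The pool contents, its size of 10000, and the fixed seed are assumptions for illustration, not details taken from the paper.

```python
import random

# Hypothetical sample pool; the paper pairs queries with solutions, but
# the concrete sample representation here is a placeholder.
pool = [("query_%d" % i, "solution_%d" % i) for i in range(10000)]

random.seed(0)  # fixed seed only so this sketch is repeatable
picked = random.sample(range(len(pool)), 160 + 6400)

train = [pool[i] for i in picked[:160]]   # 160 training samples
test = [pool[i] for i in picked[160:]]    # 6400 testing samples

assert len(train) == 160 and len(test) == 6400
# Matching the report: no separate validation split is carved out.
```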
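
The experiment-setup row names Pystruct and a zero-one loss but records no model details or versions. The sketch below shows one plausible way to wire a zero-one loss into a Pystruct structured SVM; the ToyConfigModel, its block feature map, and all dimensions are assumptions for illustration, not the paper's actual USCO-Solver model.

```python
import numpy as np
from pystruct.models import StructuredModel
from pystruct.learners import OneSlackSSVM


class ToyConfigModel(StructuredModel):
    """Illustrative model (not the paper's): x is a feature vector and y
    is the index of one of n_configs candidate configurations.
    joint_feature places x in the block of the weight vector owned by y,
    so the learned w scores every configuration against the input."""

    def __init__(self, n_features, n_configs):
        self.n_features = n_features
        self.n_configs = n_configs
        self.size_joint_feature = n_features * n_configs
        self.inference_calls = 0

    def joint_feature(self, x, y):
        phi = np.zeros(self.size_joint_feature)
        phi[y * self.n_features:(y + 1) * self.n_features] = x
        return phi

    def inference(self, x, w, relaxed=None):
        self.inference_calls += 1
        return int(np.argmax(w.reshape(self.n_configs, -1).dot(x)))

    def loss(self, y, y_hat):
        # zero-one loss, as named in the paper's experiment setup
        return 1.0 * (y != y_hat)

    def loss_augmented_inference(self, x, y, w, relaxed=None):
        scores = w.reshape(self.n_configs, -1).dot(x)
        scores += 1.0       # add the zero-one loss to every candidate...
        scores[y] -= 1.0    # ...except the ground-truth configuration
        return int(np.argmax(scores))


# Toy data: labels generated from a hidden scoring matrix so that the
# structured SVM has a consistent pattern to learn.
rng = np.random.RandomState(0)
W_true = rng.randn(5, 8)
X = [rng.rand(8) for _ in range(200)]
Y = [int(np.argmax(W_true.dot(x))) for x in X]

model = ToyConfigModel(n_features=8, n_configs=5)
learner = OneSlackSSVM(model, C=1.0, max_iter=200)
learner.fit(X, Y)
predictions = learner.predict(X)
```

The key point of the sketch is that Pystruct lets the loss enter training through loss_augmented_inference, which is where the zero-one loss named in the paper would plug in.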