Compressing Exact Cover Problems with Zero-suppressed Binary Decision Diagrams

Authors: Masaaki Nishino, Norihito Yasuda, Kengo Nakamura

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that our method is an order of magnitude faster when the problem is highly compressed. We compared our algorithm with DLX and DXZ [Nishino et al., 2017], both of which are implemented by Knuth. We also compared with sharpSAT [Thurley, 2006], a state-of-the-art #SAT solver, and d4, a state-of-the-art deterministic decomposable negation normal form (d-DNNF) compiler [Lagniez and Marquis, 2017]. We implemented D3X in C++. All experiments were run on a Linux server with a 3.5-GHz Intel(R) Xeon(R) CPU and 1 TB RAM. We selected two types of exact cover problems for benchmark instances.
Researcher Affiliation | Industry | Masaaki Nishino, Norihito Yasuda, Kengo Nakamura; NTT Communication Science Laboratories, NTT Corporation; {masaaki.nishino.uh, norihito.yasuda.hn, kengo.nakamura.dx}@hco.ntt.co.jp
Pseudocode | Yes | Algorithm 1: Overview of DLX. Algorithm 2: Algorithm D3X. Algorithm 3: Cover ZDD and Uncover ZDD. Algorithm 4: Cover Lower and Uncover Lower. Algorithm 5: Cover Upper and Uncover Upper.
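For context on what the listed pseudocode implements: DLX is Knuth's dancing-links realization of Algorithm X, a backtracking search for exact covers built around cover/uncover steps like those in Algorithms 3-5. The sketch below is a minimal, illustrative Algorithm X in plain Python sets; it is not the paper's D3X code, and the instance and helper names are invented for the example.

```python
def algorithm_x(columns, rows, partial=None):
    """Yield every exact cover: a set of row ids from `rows` (a dict
    mapping row id -> set of columns) whose column sets partition
    `columns`, i.e. each column is covered exactly once."""
    if partial is None:
        partial = []
    if not columns:
        yield list(partial)  # all columns covered exactly once
        return
    # Choose the column with the fewest candidate rows (Knuth's
    # minimum-remaining-values heuristic, also used by DLX).
    col = min(columns, key=lambda c: sum(1 for r in rows if c in rows[r]))
    for r in [r for r in rows if col in rows[r]]:
        partial.append(r)
        # "Cover" step: remove the columns hit by row r, and every row
        # that clashes with r on one of those columns.
        removed_cols = rows[r]
        sub_rows = {s: cs for s, cs in rows.items() if not (cs & removed_cols)}
        yield from algorithm_x(columns - removed_cols, sub_rows, partial)
        partial.pop()  # "uncover" step: backtrack

# Toy instance: cover columns {1, 2, 3} with rows given as column sets.
rows = {'A': {1}, 'B': {2, 3}, 'C': {1, 2}, 'D': {3}}
solutions = list(algorithm_x({1, 2, 3}, rows))  # exact covers: A+B and C+D
```

DLX makes the cover/uncover steps O(1) per removed element via doubly linked lists rather than rebuilding dictionaries as above; the paper's D3X additionally operates on a ZDD-compressed representation of the row set.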
Open Source Code | Yes | Our code is available at https://github.com/nttcslab-alg/d3x
Open Datasets | Yes | We used benchmark graphs appearing in TSPLIB [Cook and Seymour, 2003], Rome Graph [Coudert et al., 2014], and Internet Topology Zoo [Knight et al., 2011]. We also ran experiments on a 4 × 4 grid graph.
Dataset Splits | No | The paper uses benchmark instances for exact cover problems and measures the performance of different algorithms. Exact cover problems do not involve 'training' or 'validation' splits in the machine learning sense, as the algorithms directly solve the given problem instances, so dataset splits are not applicable and are not provided.
Hardware Specification | Yes | All experiments were run on a Linux server with a 3.5-GHz Intel(R) Xeon(R) CPU and 1 TB RAM.
Software Dependencies | No | The paper states, 'We implemented D3X in C++.' However, it does not provide version numbers for the C++ compiler, standard libraries, or any other third-party software dependencies used in the implementation.
Experiment Setup | No | The paper describes how the benchmark instances were generated and formulated (e.g., 'randomly select 30% of nodes and consider them as customers', 'min-fill tree decomposition heuristic'). However, it does not specify hyperparameters, optimization settings, or other system-level configurations, as D3X is a search algorithm rather than a machine learning model with a training phase.