Revisiting Immediate Duplicate Detection in External Memory Search

Authors: Shunji Lin, Alex Fukunaga

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate A*-IDD on domain-independent planning using International Planning Competition (IPC) benchmarks, and first show that segmented compression significantly outperforms compression. Then, we compare A*-IDD to External A* (Edelkamp, Jabbar, and Schrödl 2004) and A*-DDD (Korf 2004; Hatem 2014), two DDD-based A* algorithms, and show that A*-IDD is competitive with DDD-based approaches in domain-independent planning.
Researcher Affiliation | Academia | Shunji Lin, Alex Fukunaga, Graduate School of Arts and Sciences, The University of Tokyo
Pseudocode | Yes | Algorithm 1 shows the pseudocode for A*-IDD. ... Algorithm 2, described more fully below.
Open Source Code | No | The paper does not contain an unambiguous statement or link indicating that source code for the methodology described is publicly available.
Open Datasets | Yes | We used the consistent, abstraction-based merge-and-shrink heuristic (Helmert et al. 2014) with the recommended parameters. ... We selected candidate problems from the IPC optimal track domains (2008, 2011 and 2014), including only problems that exhaust 2 GiB to 10 GiB in RAM-based A* with eager duplicate detection.
Dataset Splits | No | The paper mentions using specific benchmark instances but does not provide explicit details on training, validation, and test splits (e.g., percentages, sample counts, or citations to predefined splits) to reproduce data partitioning.
Hardware Specification | Yes | We used a Xeon W3680 3.3 GHz CPU, with a RAM limit of 1.33 GiB (which gives an available memory of 1.16 GiB when excluding background processes), running Ubuntu 16.04.3 LTS. We used a 900 GB SATA 3.0 SanDisk Ultra II consumer-grade TLC SSD.
Software Dependencies | Yes | All programs were implemented in C++ (g++ 5.4.0, C++11).
Experiment Setup | Yes | For A*-IDD, we used segmented compression (p = 100), with 950 MiB for the internal hash table, in order to achieve an external hash table size that is big enough to fit the closed list of the largest problem. For the external mergesort operation in External A*, we used a chunk size of 950 MiB... For A*-DDD, we used simple tabulation hashing for assigning nodes to files, with a hash value range of [1, 20].
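
The Pseudocode row above refers to the paper's Algorithm 1 (A*-IDD) and Algorithm 2, which are not reproduced on this page. As a rough illustration of immediate duplicate detection in general, the following is a minimal C++ sketch, assuming a design in which each generated node is probed first against an in-RAM table and, on a miss, against a disk-resident hash table. All names (ExternalTable, DuplicateDetector, is_duplicate) are hypothetical, and the in-memory ExternalTable stand-in exists only so the sketch runs.

```cpp
#include <cstdint>
#include <iostream>
#include <unordered_set>

// Stand-in for the disk-resident hash table. A real implementation would
// probe an SSD-backed file at an offset derived from the key, paying one
// random read per lookup; this one is in-memory so the sketch is runnable.
struct ExternalTable {
    std::unordered_set<std::uint64_t> stored;
    bool contains(std::uint64_t key) const { return stored.count(key) != 0; }
    void insert(std::uint64_t key) { stored.insert(key); }
};

// Immediate duplicate detection (IDD): each generated node is checked
// against a small internal (RAM) table and then the external table before
// being added to the open list. Sketch only, not the paper's Algorithm 1.
class DuplicateDetector {
    std::unordered_set<std::uint64_t> internal_;  // recent nodes in RAM
    ExternalTable& external_;                     // older nodes "on disk"
public:
    explicit DuplicateDetector(ExternalTable& ext) : external_(ext) {}

    bool is_duplicate(std::uint64_t state_hash) {
        if (internal_.count(state_hash)) return true;     // cheap RAM hit
        if (external_.contains(state_hash)) return true;  // one disk probe
        internal_.insert(state_hash);  // new node; flushed to disk later
        return false;
    }
};

int main() {
    ExternalTable ext;
    DuplicateDetector idd(ext);
    std::cout << idd.is_duplicate(42) << '\n';  // 0: first time seen
    std::cout << idd.is_duplicate(42) << '\n';  // 1: duplicate
}
```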
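
The Experiment Setup row quotes segmented compression with p = 100. One plausible reading, hedged because the exact mechanism is defined by the paper's Algorithm 2, is that the internal hash table is partitioned into p segments and only one segment is flushed ("compressed") into the external table when RAM fills, rather than the entire buffer. A minimal sketch under that assumption, with a round-robin flush policy chosen purely for illustration:

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_set>
#include <vector>

// Sketch of segmented compression, assuming "compression" means merging
// the internal RAM buffer into the external (disk) hash table. With p
// segments, only one segment is flushed when memory fills. p = 100
// matches the quoted setting; everything else here is hypothetical.
class SegmentedBuffer {
    std::vector<std::unordered_set<std::uint64_t>> segments_;
    std::size_t capacity_;        // total number of keys allowed in RAM
    std::size_t size_ = 0;
    std::size_t next_flush_ = 0;  // segment flushed next (round-robin)
public:
    SegmentedBuffer(std::size_t p, std::size_t capacity)
        : segments_(p), capacity_(capacity) {}

    template <class ExternalTable>
    void insert(std::uint64_t key, ExternalTable& external) {
        if (size_ >= capacity_) {
            // Segmented compression: flush one segment, not all p of them.
            auto& seg = segments_[next_flush_];
            for (std::uint64_t k : seg) external.insert(k);
            size_ -= seg.size();
            seg.clear();
            next_flush_ = (next_flush_ + 1) % segments_.size();
        }
        // Keys are hash-partitioned across the p segments.
        if (segments_[key % segments_.size()].insert(key).second) ++size_;
    }
};
```

This pairs with the ExternalTable stand-in from the previous sketch; flushing only 1/p of the buffer at a time keeps most recently generated nodes in RAM between compressions, which is consistent with the quoted claim that segmented compression outperforms (whole-buffer) compression.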
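
The same row states that A*-DDD assigned nodes to files using simple tabulation hashing with a hash value range of [1, 20]. Simple tabulation hashing is a standard scheme: split the key into bytes, look each byte up in its own table of random words, and XOR the results. The sketch below is that textbook scheme; the reduction of the hash onto 20 file indices is an illustrative choice, not necessarily the paper's.

```cpp
#include <cstdint>
#include <iostream>
#include <random>

// Simple tabulation hashing: view the 64-bit key as 8 bytes, look each
// byte up in its own table of 256 random 64-bit words, and XOR the
// results. Seed and file-index mapping are illustrative parameters.
class TabulationHash {
    std::uint64_t table_[8][256];
public:
    explicit TabulationHash(std::uint64_t seed) {
        std::mt19937_64 rng(seed);
        for (auto& row : table_)
            for (auto& entry : row) entry = rng();
    }
    std::uint64_t operator()(std::uint64_t key) const {
        std::uint64_t h = 0;
        for (int i = 0; i < 8; ++i)
            h ^= table_[i][(key >> (8 * i)) & 0xFF];
        return h;
    }
    // Map a node's key to a file index in [1, 20], as in the quoted setup.
    int file_index(std::uint64_t key) const {
        return static_cast<int>((*this)(key) % 20) + 1;
    }
};

int main() {
    TabulationHash h(12345);  // arbitrary seed
    std::cout << h.file_index(0xDEADBEEF) << '\n';  // some file in [1, 20]
}
```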