SeeA*: Efficient Exploration-Enhanced A* Search by Selective Sampling
Authors: Dengwei Zhao, Shikui Tu, Lei Xu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on retrosynthetic planning in organic chemistry, logic synthesis in integrated circuit design, and the classical Sokoban game empirically demonstrate the efficiency of SeeA*, in comparison with the state-of-the-art heuristic search algorithms. |
| Researcher Affiliation | Academia | Dengwei Zhao (1), Shikui Tu (1), Lei Xu (1,2); (1) Department of Computer Science and Engineering, Shanghai Jiao Tong University; (2) Guangdong Institute of Intelligence Science and Technology |
| Pseudocode | Yes | Appendix A is titled 'Pseudocodes for SeeA* and its sampling strategies'. It includes 'Algorithm 1: SeeA* search algorithm', 'Algorithm 2: Uniform sampling strategy', 'Algorithm 3: Competitive clustering sampling strategy', 'Algorithm 4: UCT-like sampling strategy', and 'Algorithm 5: Node expanding for competitive clustering'. |
| Open Source Code | Yes | The source code is available at https://github.com/CMACH508/SEEA_star. |
| Open Datasets | Yes | Experiments are conducted on the widely-used USPTO benchmark, comprising 190 molecules [6]. ...the training dataset consists of 15 circuits, while the test dataset comprises 12 MCNC circuits denoted as {C1–C12} [63]... The first 50000 training problems and the 1000 test problems are collected from Boxoban [23]. |
| Dataset Splits | No | The paper describes training and test datasets but does not explicitly provide details regarding validation splits (e.g., percentages, sample counts, or k-fold cross-validation setup). |
| Hardware Specification | Yes | All experiments are conducted using NVIDIA Tesla V100 GPUs and an Intel(R) Xeon(R) Gold 6238R CPU. |
| Software Dependencies | No | The paper mentions software like 'rdchiral package', 'RDKit', and 'ABC' without specifying their version numbers. |
| Experiment Setup | Yes | The paper provides specific experimental setup details, such as: 'maximum of 500 single-step model calls, or 10 minutes of real-time', 'size of the candidate set is set to K = 50', 'parameter η is set to 0.15, and the number of clusters is 5', 'parameter cb is set to 0.35'. For logic synthesis, it mentions 'sequence length is fixed at 10', and 'Adam optimizer is employed to update the parameter with a 0.0001 learning rate'. |
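The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration sketch. This is only an illustration of the reported values; the variable names are hypothetical and may not match the released code at https://github.com/CMACH508/SEEA_star.

```python
# Hyperparameters reported in the SeeA* paper's experiment setup.
# Names are illustrative only; consult the released code for the
# actual configuration keys.
SEEA_STAR_CONFIG = {
    "max_single_step_calls": 500,        # budget on single-step model calls
    "max_wall_time_minutes": 10,         # real-time limit per problem
    "candidate_set_size_K": 50,          # size of the candidate set K
    "eta": 0.15,                         # competitive clustering parameter
    "num_clusters": 5,                   # clusters for competitive sampling
    "cb": 0.35,                          # UCT-like sampling constant
    "logic_synthesis_sequence_length": 10,
    "optimizer": "Adam",
    "learning_rate": 1e-4,               # 0.0001 learning rate
}
```

A search run would terminate on whichever of the two budgets (model calls or wall time) is exhausted first, so both limits belong in the same configuration.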