AlphaZero-based Proof Cost Network to Aid Game Solving

Authors: Ti-Rong Wu, Chung-Chin Shih, Ting Han Wei, Meng-Yu Tsai, Wei-Yuan Hsu, I-Chen Wu

ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on solving 15x15 Gomoku and 9x9 Killall-Go problems with both MCTS-based and focused depth-first proof number search (FDFPN) solvers. Comparisons between using AlphaZero networks and PCN as heuristics show that PCN can solve more problems.
Researcher Affiliation | Academia | Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan; Research Center for Information Technology Innovation, Academia Sinica, Taiwan; Department of Computing Science, University of Alberta, Edmonton, Canada
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | All experiments can be reproduced by following the instructions in the README file on https://github.com/kds285/proof-cost-network, including source code, problem sets, MCTS and FDFPN solvers, and the trained models used in this paper.
Open Datasets | Yes | All experiments can be reproduced by following the instructions in the README file on https://github.com/kds285/proof-cost-network, including source code, problem sets, MCTS and FDFPN solvers, and the trained models used in this paper. For 15x15 Gomoku, we choose Yixin (Sun, 2018)... For 9x9 Killall-Go, since there are no open-source 9x9 Killall-Go programs, we simply use α0 and PCN-bmax to generate self-play games.
Dataset Splits | No | The paper mentions collecting data via self-play for training and using generated problem sets for evaluation. However, it does not specify explicit train/validation/test splits of these datasets for network training, nor does it specify any cross-validation setup.
Hardware Specification | Yes | We use 1080Ti GPUs for training, where the network is implemented with PyTorch (Paszke et al., 2019)... Each solver runs with one CPU and one NVIDIA Tesla V100.
Software Dependencies | No | The paper mentions 'implemented with PyTorch (Paszke et al., 2019)' but does not provide specific version numbers for PyTorch or any other software libraries or solvers.
Experiment Setup | Yes | We run 400 MCTS simulations for each move during self-play, for a total of 1,500,000 games, and the network is optimized every 5,000 games. The network contains 5 residual blocks with 64 filters and is optimized by SGD with momentum 0.9, weight decay 1e-4, and a fixed learning rate of 0.02. (A minimal sketch of this configuration follows the table.)
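
To make the reported training configuration concrete, here is a minimal PyTorch sketch of a 5-block, 64-filter residual trunk with the quoted SGD settings. This is an illustration only, not the authors' implementation (which is available in the linked repository); in particular, the number of input feature planes (`in_channels=4`) is a hypothetical value, and the policy/value or proof-cost output heads are omitted.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """A standard AlphaZero-style residual block: two 3x3 convolutions with batch norm."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection


class ResNetTrunk(nn.Module):
    """Input convolution followed by the 5 residual blocks with 64 filters reported in the paper.

    `in_channels=4` is an assumed number of board feature planes; output heads are not shown.
    """

    def __init__(self, in_channels: int = 4, channels: int = 64, num_blocks: int = 5):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(num_blocks)])

    def forward(self, x):
        return self.blocks(self.stem(x))


net = ResNetTrunk()
# Optimizer settings quoted in the paper: SGD, momentum 0.9, weight decay 1e-4, fixed lr 0.02.
optimizer = torch.optim.SGD(net.parameters(), lr=0.02, momentum=0.9, weight_decay=1e-4)
```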