MaskPlace: Fast Chip Placement via Reinforced Visual Representation Learning

Authors: Yao Lai, Yao Mu, Ping Luo

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experiments on many public benchmarks show that MaskPlace outperforms existing RL approaches in all key performance metrics." |
| Researcher Affiliation | Academia | Yao Lai, Yao Mu, Ping Luo; Department of Computer Science, The University of Hong Kong; {ylai,ymu,pluo}@cs.hku.hk |
| Pseudocode | No | The paper describes algorithms in Appendix A.3 but does not present them in a formal pseudocode or algorithm block. |
| Open Source Code | Yes | "The deliverables are released at laiyao1.github.io/maskplace." |
| Open Datasets | Yes | "We evaluate MaskPlace in 24 circuit benchmarks selected from public datasets, including the widely-used ISPD2005 [30], IBM benchmark suite [31], and Ariane RISC-V CPU design [32]." |
| Dataset Splits | No | The paper evaluates methods on public benchmarks but does not specify explicit training, validation, or test splits (e.g., percentages or counts), nor does it refer to predefined splits used for validation. |
| Hardware Specification | Yes | "All of them are evaluated on one GeForce RTX 3090 GPU, and the CPU version of DREAMPlace is allocated with 16 threads in a 16-CPU-core environment." |
| Software Dependencies | No | The paper mentions the PPO2 framework but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | "The detailed network architectures are provided in Appendix A.4. The detailed training setup is provided in Appendix A.5." |