Symbol as Points: Panoptic Symbol Spotting via Point-based Representation

Authors: Wenlong Liu, Tianyu Yang, Yuhan Wang, Qizhi Yu, Lei Zhang

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our approach, named SymPoint, is simple yet effective, outperforming the recent state-of-the-art method GAT-CADNet by an absolute increase of 9.6% PQ and 10.4% RQ on the FloorPlanCAD dataset. We conduct extensive experiments on the FloorPlanCAD dataset and our SymPoint achieves 83.3% PQ and 91.1% RQ under the panoptic symbol spotting setting, which outperforms the recent state-of-the-art method GAT-CADNet (Zheng et al., 2022) by a large margin. (See the PQ/RQ sketch after this table.)
Researcher Affiliation | Collaboration | Wenlong Liu1, Tianyu Yang1, Yuhan Wang2, Qizhi Yu2, Lei Zhang1; 1International Digital Economy Academy (IDEA); 2Vanyi Tech
Pseudocode | No | The paper describes its methods and formulations mathematically and in narrative text, but it does not include any explicitly labeled "Pseudocode" or "Algorithm" blocks.
Open Source Code | Yes | The source code and models will be available at https://github.com/nicehuster/SymPoint.
Open Datasets | Yes | We present the experimental setting and benchmark results on the public CAD drawing dataset FloorPlanCAD (Fan et al., 2021). The FloorPlanCAD dataset has 11,602 CAD drawings of various floor plans with segment-grained panoptic annotation, covering 30 thing and 5 stuff classes.
Dataset Splits | No | The paper discusses the total size of the FloorPlanCAD dataset and implicitly refers to a test split (e.g., in the caption of Table 5), but it does not provide explicit training, validation, and test split percentages or sample counts for the main dataset.
Hardware Specification | Yes | We train the model for 1000 epochs with a batch size of 2 per GPU on 8 NVIDIA A100 GPUs.
Software Dependencies | No | The paper mentions using PyTorch for implementation, but it does not specify version numbers for PyTorch or any other software dependencies needed to replicate the experiments.
Experiment Setup | Yes | We choose AdamW (Loshchilov & Hutter, 2017) as the optimizer with a default weight decay of 0.001, the initial learning rate is 0.0001, and we train the model for 1000 epochs with a batch size of 2 per GPU on 8 NVIDIA A100 GPUs. In our experiments, we empirically set λBCE : λdice : λcls : λCCL = 5 : 5 : 2 : 8. (See the configuration sketch after this table.)
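
For context on the headline numbers in the Research Type row, PQ (panoptic quality) factors into segmentation quality (SQ) and recognition quality (RQ). Below is a minimal Python sketch of that arithmetic, assuming prediction-to-ground-truth matching has already been performed; FloorPlanCAD matches symbols at the graphical-primitive level, which is not reproduced here.

    def panoptic_quality(tp_ious, num_fp, num_fn):
        """Compute PQ, SQ, RQ from matched symbol pairs.

        tp_ious: IoU values of matched prediction/ground-truth pairs
                 (true positives, i.e. pairs with IoU above the threshold).
        num_fp:  number of unmatched predictions (false positives).
        num_fn:  number of unmatched ground-truth symbols (false negatives).
        """
        tp = len(tp_ious)
        if tp == 0:
            return 0.0, 0.0, 0.0
        sq = sum(tp_ious) / tp                        # segmentation quality
        rq = tp / (tp + 0.5 * num_fp + 0.5 * num_fn)  # recognition quality
        return sq * rq, sq, rq                        # PQ = SQ * RQ

With the reported RQ of 0.911 and PQ of 0.833, the implied SQ is roughly 0.914, since PQ = SQ × RQ.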
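
The Experiment Setup row translates directly into optimizer and loss-weight code. The following is a minimal PyTorch sketch under stated assumptions: the linear layer is a placeholder stand-in for the SymPoint network, and the individual loss functions are left abstract; only the AdamW settings and the λ ratios come from the paper.

    import torch
    from torch import nn

    # Placeholder stand-in for the SymPoint network; the real architecture
    # is not reproduced here, only the reported training configuration.
    model = nn.Linear(128, 35)  # 30 thing + 5 stuff classes

    # Optimizer settings as reported: AdamW, lr 0.0001, weight decay 0.001.
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-3)

    # Loss-weight ratios as reported: λBCE : λdice : λcls : λCCL = 5 : 5 : 2 : 8.
    W_BCE, W_DICE, W_CLS, W_CCL = 5.0, 5.0, 2.0, 8.0

    def total_loss(l_bce, l_dice, l_cls, l_ccl):
        # Weighted sum of the four loss terms with the reported ratios;
        # the individual loss computations are assumed, not shown.
        return W_BCE * l_bce + W_DICE * l_dice + W_CLS * l_cls + W_CCL * l_ccl

With a batch size of 2 per GPU on 8 GPUs, the effective batch size is 16.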