Learning to Assemble Geometric Shapes

Authors: Jinhwi Lee, Jungtaek Kim, Hyunsoo Chung, Jaesik Park, Minsu Cho

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the effectiveness on shape assembly tasks under various scenarios, including those with abnormal fragments (e.g., missing or distorted), different numbers of fragments, and different rotation discretizations. We show qualitative and quantitative results compared to three baselines for four target geometric shapes, and we also provide studies on interesting assembly scenarios. Details of experiments: as shown in Figure 1, we evaluate our method and existing methods on the following target objects: Square, Mondrian-Square, Pentagon, and Hexagon. Unless specified otherwise, we use 5,000 samples, each of which is partitioned into 8 fragments using binary space partitioning, and the number of rotation angle bins is set to 1, 4, or 20. We use 64%, 16%, and 20% of the samples for training, validation, and test splits, respectively.
Researcher Affiliation | Collaboration | Jinhwi Lee [1,2], Jungtaek Kim [1], Hyunsoo Chung [1], Jaesik Park [1], and Minsu Cho [1]. 1: Pohang University of Science and Technology (POSTECH); 2: POSCO. {jinhwi, jtkim, hschung2, jaesik.park, mscho}@postech.ac.kr
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Supplementary material and implementations are available at https://github.com/POSTECH-CVLab/LAGS.
Open Datasets | No | We create a dataset by partitioning a shape into multiple fragments, which can easily be used to pose its inverse task, i.e., an assembly problem. Inspired by the binary space partitioning algorithm [Schumacher et al., 1969], we randomly split a target shape, create a set of random fragments for each target shape, choose the order of fragments, and rotate them at random. The paper describes creating its own dataset but does not provide concrete access information (a link, DOI, or specific citation for where *their* dataset can be accessed). A sketch of this fragment-generation procedure is given after this table.
Dataset Splits | Yes | We use 64%, 16%, and 20% of the samples for training, validation, and test splits, respectively. (A split sketch is given after this table.)
Hardware Specification | No | The paper does not mention any specific hardware details such as GPU or CPU models used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependency versions (e.g., library names with version numbers like PyTorch 1.9 or TensorFlow 2.x).
Experiment Setup | Yes | Unless specified otherwise, we use 5,000 samples, each of which is partitioned into 8 fragments using binary space partitioning, and the number of rotation angle bins is set to 1, 4, or 20. We use 64%, 16%, and 20% of the samples for training, validation, and test splits, respectively. (A rotation-discretization sketch is given below.)
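
The Open Datasets row above describes generating fragments by recursively splitting a target shape. Below is a minimal sketch of that idea in Python, assuming the shapely library for polygon operations; the function names (random_cut, bsp_fragments) are illustrative and not taken from the LAGS repository.

import math
import random

from shapely.geometry import LineString, Polygon
from shapely.ops import split

def random_cut(polygon):
    """Try to split a convex polygon into two pieces with a random line."""
    minx, miny, maxx, maxy = polygon.bounds
    # Pick a random point in the bounding box and a random direction.
    cx, cy = random.uniform(minx, maxx), random.uniform(miny, maxy)
    theta = random.uniform(0.0, math.pi)
    # Extend the cutting line far beyond the bounds so it fully crosses the shape.
    r = 10.0 * max(maxx - minx, maxy - miny)
    line = LineString([(cx - r * math.cos(theta), cy - r * math.sin(theta)),
                       (cx + r * math.cos(theta), cy + r * math.sin(theta))])
    pieces = list(split(polygon, line).geoms)
    return pieces if len(pieces) >= 2 else None

def bsp_fragments(polygon, num_fragments=8):
    """Recursively cut the largest fragment until the target count is reached."""
    fragments = [polygon]
    while len(fragments) < num_fragments:
        fragments.sort(key=lambda p: p.area, reverse=True)
        pieces = random_cut(fragments[0])
        if pieces is not None:
            fragments = fragments[1:] + pieces
    return fragments

square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
print([round(f.area, 3) for f in bsp_fragments(square)])

The paper additionally shuffles the fragment order and rotates each fragment at random; both are straightforward post-processing steps on the list returned above.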
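
The Experiment Setup row reports rotation angle bins of 1, 4, or 20. Here is a hedged sketch of such a discretization, assuming the bins uniformly cover [0, 360) degrees; the paper does not state the exact binning convention.

def angle_to_bin(angle_deg, num_bins):
    """Map a rotation angle in degrees to one of num_bins uniform bins."""
    width = 360.0 / num_bins
    return int((angle_deg % 360.0) // width)

def bin_to_angle(bin_idx, num_bins):
    """Return the representative angle (left bin edge) in degrees."""
    return bin_idx * (360.0 / num_bins)

for k in (1, 4, 20):
    b = angle_to_bin(97.0, k)
    print(k, b, bin_to_angle(b, k))  # a single bin collapses all rotations to 0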
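
Finally, the Dataset Splits row reports a 64% / 16% / 20% split of the 5,000 samples per shape. A minimal sketch of that split follows; the shuffling and the seed are assumptions for illustration, not details from the paper.

import random

n = 5000
indices = list(range(n))
random.Random(0).shuffle(indices)  # assumed seed; the paper does not specify one

n_train, n_valid = int(0.64 * n), int(0.16 * n)  # 3,200 and 800 samples
train = indices[:n_train]
valid = indices[n_train:n_train + n_valid]
test = indices[n_train + n_valid:]               # remaining 1,000 samples
print(len(train), len(valid), len(test))         # 3200 800 1000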