Generative 3D Part Assembly via Dynamic Graph Learning

Authors: Jialei Huang, Guanqi Zhan, Qingnan Fan, Kaichun Mo, Lin Shao, Baoquan Chen, Leonidas J. Guibas, Hao Dong

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive experiments and quantitative comparisons to three strong baseline methods, demonstrating the effectiveness of the proposed approach.
Researcher Affiliation | Academia | Jialei Huang, Peking University [...] Guanqi Zhan, Peking University [...] Qingnan Fan, Stanford University [...] Kaichun Mo, Stanford University [...] Lin Shao, Stanford University [...] Baoquan Chen, CFCS, CS Dept., Peking University; AIIT, Peking University [...] Leonidas Guibas, Stanford University [...] Hao Dong, CFCS, CS Dept., Peking University; AIIT, Peking University; Peng Cheng Laboratory
Pseudocode | No | The paper describes its model and mathematical formulations but does not include a structured pseudocode block or an algorithm section.
Open Source Code | No | The paper does not provide any explicit statements about open-source code availability or links to a code repository for the methodology.
Open Datasets | Yes | We leverage the recent PartNet [24], a large-scale shape dataset with fine-grained and hierarchical part segmentations, for both training and evaluation. We use the three largest categories, chairs, tables and lamps, and adopt its default train/validation/test splits in the dataset.
Dataset Splits | Yes | We use the three largest categories, chairs, tables and lamps, and adopt its default train/validation/test splits in the dataset.
Hardware Specification | No | The paper mentions 'GPU supports' in the Acknowledgement section but does not specify any particular GPU models, CPU types, or other detailed hardware specifications used for experiments.
Software Dependencies | No | The paper does not provide specific software dependency details, such as programming language versions or library versions (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | In practice, we sample 5 particles of z_j to approximate Eq. (9). [...] The loss L is implemented as a weighted combination of both local part and global shape losses, detailed as below. Each part pose q_i can be decomposed into rotation r_i and translation t_i. We supervise the translation via an L2 loss, [...] The rotation is supervised via Chamfer distance on the rotated part point cloud [...] we also supervise the full shape assembly S using Chamfer distance (CD) [...]. Our iterative graph neural network runs for 5 iterations [...]. We use Furthest Point Sampling (FPS) to sample 1,000 points for each part point cloud.
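
The loss terms quoted in the Experiment Setup row can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch rendition (not the authors' released code) of the three supervision terms: an L2 loss on translation, a Chamfer-distance loss on the rotated part point cloud, and a Chamfer-distance loss on the assembled shape. The function names, tensor shapes, and the weights w_t, w_r, w_s are assumptions introduced here for illustration, not values taken from the paper.

```python
import torch


def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point clouds a: (B, N, 3) and b: (B, M, 3)."""
    d = torch.cdist(a, b)  # (B, N, M) pairwise Euclidean distances
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)


def assembly_loss(pred_rot_pts, gt_rot_pts,    # rotated per-part point clouds: (B, P, N, 3)
                  pred_trans, gt_trans,        # per-part translations t_i: (B, P, 3)
                  pred_shape, gt_shape,        # assembled full shapes S: (B, K, 3)
                  w_t=1.0, w_r=1.0, w_s=1.0):  # loss weights are assumed placeholders
    B, P, N, _ = pred_rot_pts.shape
    # Translation t_i: supervised with a plain L2 loss.
    loss_t = ((pred_trans - gt_trans) ** 2).sum(dim=-1).mean()
    # Rotation r_i: supervised via Chamfer distance on each rotated part point cloud.
    loss_r = chamfer_distance(pred_rot_pts.reshape(B * P, N, 3),
                              gt_rot_pts.reshape(B * P, N, 3)).mean()
    # Full shape assembly: Chamfer distance on the union of posed parts.
    loss_s = chamfer_distance(pred_shape, gt_shape).mean()
    return w_t * loss_t + w_r * loss_r + w_s * loss_s
```

Feeding random tensors of the shapes noted in the comments (e.g., B=4 shapes, P=10 parts, N=1,000 points per part, matching the FPS sample size quoted above) exercises all three terms and returns a single scalar suitable for backpropagation.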