Diverse Shape Completion via Style Modulated Generative Adversarial Networks

Authors: Wesley Khademi, Fuxin Li

NeurIPS 2023

Reproducibility assessment — each entry lists the variable, the result, and the LLM's supporting response:
Research Type: Experimental
  "In this section, we evaluate our method against a variety of baselines on the task of multimodal shape completion and show superior quantitative and qualitative results across several synthetic and real datasets. We further conduct a series of ablations to justify the design choices of our method."
Researcher Affiliation: Academia
  "Wesley Khademi, Oregon State University, khademiw@oregonstate.edu; Li Fuxin, Oregon State University, lif@oregonstate.edu"
Pseudocode: No
  The paper describes the method using diagrams and textual descriptions, but does not include structured pseudocode or algorithm blocks.
Open Source Code: No
  The paper does not contain an explicit statement about the release of open-source code or a link to a code repository.
Open Datasets: Yes
  "We conduct experiments on several synthetic and real datasets. Following the setup of [8], we evaluate our approach on the Chair, Table, and Airplane categories of the 3D-EPN dataset [58]. Similarly, we also perform experiments on the Chair, Table, and Lamp categories from the PartNet dataset [59]. To evaluate our method on real scanned data, we conduct experiments on the Google Scanned Objects (GSO) dataset [60]."
Dataset Splits: No
  The paper mentions training but does not provide explicit train/validation/test splits (e.g., percentages, sample counts, or a citation to predefined splits) needed for reproduction.
Hardware Specification: Yes
  "All models are trained on two NVIDIA Tesla V100 GPUs and take about 30 hours to train."
Software Dependencies: No
  The paper does not provide ancillary software details with version numbers (e.g., Python, PyTorch, TensorFlow, or specific libraries and solvers and their versions).
Experiment Setup: Yes
  "Implementation Details: Our model takes in N_P = 1024 points as partial input and produces N = 2048 points as a completion. For training the generator, the Adam optimizer is used with an initial learning rate of 1×10⁻⁴ and the learning rate is linearly decayed every 2 epochs with a decay rate of 0.98. For the discriminator, the Adam optimizer is used with a learning rate of 1×10⁻⁴. We train a separate model for each shape category and train each model for 300 epochs with a batch size of 56."
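The quoted schedule (initial learning rate 1×10⁻⁴, multiplied by a decay rate of 0.98 once every 2 epochs over 300 epochs) can be sketched as a small helper. This is a reading of the paper's description, not released code; the function name and signature are ours.

```python
def generator_lr(epoch: int, base_lr: float = 1e-4,
                 gamma: float = 0.98, step: int = 2) -> float:
    """Generator learning rate after `epoch` epochs under a step decay:
    the rate is multiplied by `gamma` once every `step` epochs, matching
    the paper's stated "decay rate of 0.98 every 2 epochs"."""
    return base_lr * gamma ** (epoch // step)

# Sanity checks against the quoted setup:
assert generator_lr(0) == 1e-4                    # initial learning rate
assert generator_lr(2) == 1e-4 * 0.98             # after the first decay step
assert generator_lr(3) == generator_lr(2)         # no decay between steps
```

Over the paper's 300 epochs this yields 150 decay steps, so the final rate is 1×10⁻⁴ · 0.98¹⁵⁰; in a PyTorch implementation the same schedule would correspond to a step scheduler with `step_size=2` and `gamma=0.98`.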