Primal-Dual Mesh Convolutional Neural Networks

Authors: Francesco Milano, Antonio Loquercio, Antoni Rosinol, Davide Scaramuzza, Luca Carlone

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We evaluate PD-Mesh Net on the tasks of mesh classification and mesh segmentation. On these tasks and on several datasets, we outperform state-of-the-art methods.
Researcher Affiliation Academia Francesco Milano, ETH Zurich, Switzerland, fmilano@student.ethz.ch; Antonio Loquercio, Robotics and Perception Group, University of Zurich, Switzerland, loquercio@ifi.uzh.ch; Antoni Rosinol, SPARK Lab, MIT, USA, arosinol@mit.edu; Davide Scaramuzza, Robotics and Perception Group, University of Zurich, Switzerland, sdavide@ifi.uzh.ch; Luca Carlone, SPARK Lab, MIT, USA, lcarlone@mit.edu
Pseudocode No The paper describes its methods through prose and diagrams but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code Yes Our code is publicly available at https://github.com/MIT-SPARK/PD-MeshNet.
Open Datasets Yes SHREC dataset [48], Cube Engraving dataset released by [12], COSEG [56] and Human Body [57].
Dataset Splits No The paper provides detailed training and testing splits for the datasets but does not explicitly mention a separate validation split. For example, for the SHREC dataset: “split 16 where for each class 16 samples are used for training and 4 for testing and split 10 in which the samples of each class are subdivided equally between training and the test set.” and for Cube Engraving: “(170 training samples and 30 test samples)”. No explicit mention of validation data for hyperparameter tuning.
Hardware Specification No The paper does not explicitly describe the specific hardware used for running its experiments (e.g., CPU/GPU models, memory, or cloud instances).
Software Dependencies No The paper mentions software like “PyTorch [46]” and “PyTorch Geometric [47]” but does not provide specific version numbers for these dependencies.
Experiment Setup Yes In all the experiments we use Adam algorithm [45] for optimization. The network is trained using cross-entropy on the predicted labels. ... We limit the number of training epochs to 200... Every node of the resulting primal graph...is trained using cross-entropy loss for 1000 epochs. ...we generate 20 augmented versions of each training sample by randomly shifting the vertices along the edges.
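The training configuration quoted above (Adam optimizer, cross-entropy loss on predicted labels, a 200-epoch cap) can be sketched in PyTorch as follows. This is a minimal illustration only: the tiny linear model, the random features and labels, and the learning rate are placeholder assumptions, not the actual PD-MeshNet architecture or hyperparameters.

```python
import torch
import torch.nn as nn

# Sketch of the reported setup: Adam optimizer, cross-entropy loss,
# training capped at 200 epochs. The model and data are dummies.
torch.manual_seed(0)

num_classes = 4
model = nn.Linear(16, num_classes)        # placeholder, not PD-MeshNet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr assumed
criterion = nn.CrossEntropyLoss()         # cross-entropy on predicted labels

features = torch.randn(32, 16)            # dummy per-sample features
labels = torch.randint(0, num_classes, (32,))

max_epochs = 200                          # "limit the number of training epochs to 200"
losses = []
for epoch in range(max_epochs):
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

The augmentation step mentioned in the paper (randomly shifting vertices along edges to produce 20 variants per training sample) would be applied to the mesh data before this loop; it is omitted here since it depends on the mesh representation.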