Gauge Equivariant Transformer

Authors: Lingshen He, Yiming Dong, Yisen Wang, Dacheng Tao, Zhouchen Lin

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper conducts extensive experiments to evaluate the effectiveness of GET on two deformable-domain recognition tasks, together with parameter sensitivity analysis and several ablation studies. The model outperforms HSN on the SHREC dataset by 3.1% accuracy and outperforms MeshCNN on the Human Body Segmentation dataset by 0.3% accuracy with far fewer parameters, presenting state-of-the-art performance.
Researcher Affiliation | Collaboration | Lingshen He (1), Yiming Dong (1), Yisen Wang (1,2), Dacheng Tao (4), Zhouchen Lin (1,2,3). Affiliations: (1) Key Laboratory of Machine Perception (MOE), School of Artificial Intelligence, Peking University; (2) Institute for Artificial Intelligence, Peking University; (3) Pazhou Lab, Guangzhou 510330; (4) JD Explore Academy, JD.com.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access to source code for the described methodology.
Open Datasets | Yes | The model is evaluated on two deformable-domain tasks, including the SHREC dataset (where it outperforms HSN by 3.1% accuracy) and the Human Body Segmentation dataset, which consists of 370 training models from MIT [50], FAUST [5], Adobe Fuse [1], and SCAPE [3], and 18 test models from SHREC07 [23].
Dataset Splits | No | The paper states the numbers of training and test models, but gives no details of a validation split (e.g., percentage or sample count).
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run its experiments.
Software Dependencies | No | The paper does not list ancillary software dependencies with version numbers.
Experiment Setup | No | Details of the neural network architecture, training settings, and hyperparameters are deferred to the supplementary materials rather than given in the paper itself.