Representation-Agnostic Shape Fields
Authors: Xiaoyang Huang, Jiancheng Yang, Yanjun Wang, Ziyu Chen, Linguo Li, Teng Li, Bingbing Ni, Wenjun Zhang
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on diverse 3D representation formats, networks, and applications validate the universal effectiveness of the proposed RASF. |
| Researcher Affiliation | Academia | 1School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University 2Anhui University |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and pre-trained models are publicly available: https://github.com/seanywang0408/RASF |
| Open Datasets | Yes | ModelNet10 (Wu et al., 2015): 3,991 train / 908 test, Point Cloud & Voxel, 3D Objects (Rigid), Classification (ACC); ModelNet40 (Wu et al., 2015): 9,843 train / 2,468 test, Point Cloud & Voxel, 3D Objects (Rigid), Classification (ACC); ShapeNet Part (Yi et al., 2016): 12,137 train / 2,874 test, Point Cloud, 3D Objects (Rigid), Part Segmentation (mIoU); S3DIS (Armeni et al., 2016): 6-fold cross-validation, Point Cloud, Indoor Scene (Rigid), Semantic Segmentation (mIoU); SHREC10 (Lian et al., 2011): 300 train / 300 test, Mesh, 3D Objects (Rigid & Non-Rigid), Classification (ACC); SHREC16 (Lian et al., 2011): 480 train / 120 test, Mesh, 3D Objects (Rigid & Non-Rigid), Classification (ACC); HUMAN (Maron et al., 2017): 370 train / 18 test, Mesh, Human Bodies (Non-Rigid), Part Segmentation (mIoU) |
| Dataset Splits | Yes | Table 2: Datasets used in this study, with their train-test splits, data representation formats, descriptions and tasks. ... S3DIS (Armeni et al., 2016) 6-fold cross-validation... Appendix A.1 DETAILS OF PRE-TRAINING RASF ...ShapeNet Part (Yi et al., 2016). It includes training samples of 12,137 and testing samples of 2,847. |
| Hardware Specification | Yes | All the results are measured on an RTX 2080 Ti. |
| Software Dependencies | No | The paper mentions PyTorch and a Python library (visvis) but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | For classification, we train the model for 200 epochs using Adam (Kingma & Ba, 2014) optimizer with an initial learning rate of 0.002 and linearly decrease the learning rate to 0 from the 100th epoch. For segmentation, the number of epochs is 600 while the initial learning rate is set to 0.002. ... The training lasts for 150 epochs, with an initial learning rate of 0.001 using Adam. We decay the learning rate by 0.2 for every 50 epochs. |
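
The Experiment Setup row above quotes two training recipes: a downstream schedule (Adam, initial learning rate 0.002, linearly decayed to zero from epoch 100 of 200 for classification) and the RASF pre-training schedule from the appendix (Adam, initial learning rate 0.001, multiplied by 0.2 every 50 epochs over 150 epochs). The sketch below shows one way to express those schedules in PyTorch; the placeholder model, the loop body, and the `linear_decay` helper are illustrative assumptions, not code from the authors' repository.

```python
# Minimal sketch of the reported optimization schedules, assuming a standard
# PyTorch training loop. The model and data loading are placeholders.
import torch

model = torch.nn.Linear(3, 40)  # placeholder for the actual network

# Classification: 200 epochs, Adam, lr 0.002, linear decay to 0 from epoch 100.
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)
total_epochs, decay_start = 200, 100

def linear_decay(epoch: int) -> float:
    # Learning-rate multiplier: 1.0 for the first 100 epochs,
    # then linearly down to 0 by the final epoch.
    if epoch < decay_start:
        return 1.0
    return max(0.0, 1.0 - (epoch - decay_start) / (total_epochs - decay_start))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=linear_decay)

# RASF pre-training as quoted from the appendix: 150 epochs, Adam with
# lr 0.001, decayed by a factor of 0.2 every 50 epochs (a StepLR schedule).
pretrain_opt = torch.optim.Adam(model.parameters(), lr=0.001)
pretrain_sched = torch.optim.lr_scheduler.StepLR(pretrain_opt, step_size=50, gamma=0.2)

for epoch in range(total_epochs):
    # ... forward/backward passes over the training set would go here ...
    scheduler.step()
```

For the segmentation experiments the report quotes the same initial learning rate of 0.002 but 600 epochs, so only `total_epochs` (and the corresponding decay start) would change under this sketch.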