Self-Supervised Deep Learning on Point Clouds by Reconstructing Space
Authors: Jonathan Sauder, Bjarne Sievers
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We show experimentally that pre-training with our method before supervised training improves the performance of state-of-the-art models and significantly improves sample efficiency." (Section 4, Experiments) |
| Researcher Affiliation | Academia | Jonathan Sauder, Hasso Plattner Institute, Potsdam, Germany (jonathan.sauder@student.hpi.de); Bjarne Sievers, Hasso Plattner Institute, Potsdam, Germany (bjarne.sievers@student.hpi.de) |
| Pseudocode | Yes | Algorithm 1: Generation of Self-Supervised Labels |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for their methodology is open-source or publicly available. |
| Open Datasets | Yes | ModelNet dataset [35], ShapeNet dataset [5], ShapeNet Part dataset [38], Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset [2] |
| Dataset Splits | Yes | "For this we use the standard train/test split, with the same uniform point sample as defined in [23], with ModelNet40 on 40 classes containing 9843 train and 2468 test models, and ModelNet10 on ten classes containing 3991 train and 909 test models respectively. We use the official train/validation/test splits [38]." |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., specific GPU or CPU models, memory, or cloud computing instance types). |
| Software Dependencies | No | The paper mentions various software components and models (e.g., PointNet, DGCNN, SVM, t-SNE, UMAP) but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | "While k may be varied across domains... we list all results with k = 3. Additional details are discussed in Section 5." ... "Randomly rotating 15% of the individual voxels and randomly replacing one voxel in each input point cloud with a random voxel from a randomly drawn input point cloud from the same dataset leads to a slightly higher quality of the embeddings in the object classification task (consistently around 0.2% SVM accuracy in the downstream object classification task); therefore we kept this setup throughout all experiments." ... "Pre-training a DGCNN in a self-supervised manner on the ShapeNet dataset with 1024 points chosen randomly from each model for 100 epochs before fully supervised training on the ModelNet40 dataset." |
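
The label-generation step the table refers to (Algorithm 1, with the default k = 3) can be sketched as follows. This is a reconstruction from the paper's description, not the authors' code: the function name, normalization scheme, and permutation details are assumptions; the idea is to split a point cloud into a k x k x k voxel grid, randomly rearrange the voxels in space, and label every point with the index of the voxel it originally came from, so a network can be trained to "reconstruct space" by predicting those labels.

```python
import numpy as np

def make_self_supervised_labels(points, k=3, rng=None):
    """Hypothetical sketch of self-supervised label generation.

    Splits a point cloud into a k*k*k voxel grid, labels each point with
    its original (flat) voxel index, then randomly permutes the voxels'
    grid positions by translating each voxel's points to a new cell.
    Returns the displaced point cloud and the per-point labels.
    """
    rng = np.random.default_rng() if rng is None else rng
    pts = np.asarray(points, dtype=np.float64)
    # Normalize into the unit cube so voxel indices are well defined.
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    unit = (pts - mins) / np.maximum(maxs - mins, 1e-12)
    # Per-axis voxel coordinates in {0, ..., k-1}.
    vox = np.minimum((unit * k).astype(int), k - 1)
    # Flat voxel index in {0, ..., k^3 - 1}: the self-supervised label.
    labels = vox[:, 0] * k * k + vox[:, 1] * k + vox[:, 2]
    # Randomly permute voxel positions: voxel i moves to grid cell perm[i].
    perm = rng.permutation(k ** 3)
    new_flat = perm[labels]
    new_vox = np.stack(
        [new_flat // (k * k), (new_flat // k) % k, new_flat % k], axis=1
    )
    # Translate each point by the offset between its new and old cell.
    shuffled = unit + (new_vox - vox) / k
    return shuffled, labels
```

The table's augmentation quote (rotating 15% of voxels, swapping in one voxel from another cloud) would be applied on top of this displacement step before feeding the shuffled cloud to the network.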