LinNet: Linear Network for Efficient Point Cloud Representation Learning

Authors: Hao Deng, Kunlei Jing, Shengmei Chen, Cheng Liu, Jiawei Ru, Bo Jiang, Lin Wang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | LinNet, as a purely point-based method, outperforms most previous methods in both indoor and outdoor scenes without any extra attention or sparse convolution, relying merely on a simple MLP. It achieves mIoU of 73.7%, 81.4%, and 69.1% on the S3DIS Area 5, NuScenes, and SemanticKITTI validation benchmarks, respectively... and 4 Experiments: To validate the effectiveness of LinNet, we conduct experiments on 3D semantic segmentation and 3D object classification tasks. We also conduct an extensive ablation study to analyze each component of LinNet. (A generic point-wise MLP sketch follows the table.)
Researcher Affiliation | Academia | Hao Deng 1,2, Kunlei Jing 3,4, Shengmei Cheng 1,2, Cheng Liu 1,2, Jiawei Ru 1,2, Jiang Bo 1,2, Lin Wang 1,2. 1 State-Province Joint Engineering and Research Center of Advanced Networking and Intelligent Information Services, School of Information Science and Technology, Northwest University; 2 Shaanxi Key Laboratory of Higher Education Institution of Generative Artificial Intelligence and Mixed Reality; 3 School of Software Engineering, Xi'an Jiaotong University; 4 Department of Computing, The Hong Kong Polytechnic University
Pseudocode | No | The paper describes its algorithms (the DSA module and the point searching strategy) in prose but does not present them in a pseudocode block or a clearly labeled Algorithm box.
Open Source Code | Yes | Our code will be available at https://github.com/DengH293/LinNet.
Open Datasets | Yes | S3DIS [49] (Stanford Large-Scale 3D Indoor Spaces), the NuScenes [50] dataset, the ScanObjectNN [53] and ModelNet40 [54] datasets, and the SemanticKITTI [55] dataset.
Dataset Splits | Yes | We adhered to the official segmentation protocol, allocating 700 scenes for training, 150 for validation, and another 150 for testing, ensuring a balanced and comprehensive evaluation of our model's performance across varied scenes. Also, for S3DIS 6-fold cross-validation: to evaluate the generalization capabilities, we perform 6-fold cross-validation on the S3DIS dataset to ensure a robust assessment of our model's performance across different subsets of the data. (A sketch of the 6-fold split enumeration follows the table.)
Hardware Specification | Yes | The latency of each network is measured on a single Nvidia 3090 GPU, taking a batch of 80k points. The experiments are performed on an RTX 3090. Additionally reported: GPU: Nvidia RTX 4090D ×4; CPU: AMD EPYC 9754 (128-core). (A latency-timing sketch follows the table.)
Software Dependencies | Yes | CUDA version: 11.3; PyTorch version: 1.12.1. (An environment-check sketch follows the table.)
Experiment Setup | Yes | The specific model training settings are shown in Tab. 8 and Tab. 9. We used cross-entropy loss in all experiments. (Table 9 provides specific values for Epoch, Learning Rate, Weight Decay, Scheduler, Optimizer, and Batch Size for the various datasets.) (A hedged training-setup sketch follows the table.)
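For readers unfamiliar with what "purely point-based, relying merely on a simple MLP" means in the Research Type row, the sketch below shows a generic shared point-wise MLP block of the kind such methods build on. This is not the LinNet architecture (the paper's DSA module and point searching strategy are not reproduced here); the layer widths and overall structure are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedPointMLP(nn.Module):
    """Generic shared MLP applied independently to every point.

    An illustrative stand-in for "MLP-only" point-based backbones;
    NOT the actual LinNet block, whose design is given in the paper.
    """
    def __init__(self, in_dim=3, hidden_dims=(64, 128, 256)):  # hypothetical widths
        super().__init__()
        layers, dim = [], in_dim
        for h in hidden_dims:
            layers += [nn.Linear(dim, h), nn.BatchNorm1d(h), nn.ReLU(inplace=True)]
            dim = h
        self.mlp = nn.Sequential(*layers)

    def forward(self, points):      # points: (N, in_dim) xyz coordinates
        return self.mlp(points)     # per-point features: (N, hidden_dims[-1])

feats = SharedPointMLP()(torch.rand(80_000, 3))  # e.g. a batch of 80k points
print(feats.shape)                               # torch.Size([80000, 256])
```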
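The 6-fold cross-validation protocol on S3DIS mentioned in the Dataset Splits row holds out one of the six areas per fold and trains on the other five; the single-split result reported in the paper corresponds to testing on Area 5. A minimal sketch of that fold enumeration is below; it only illustrates the protocol and is not code from the released repository.

```python
S3DIS_AREAS = ["Area_1", "Area_2", "Area_3", "Area_4", "Area_5", "Area_6"]

def six_fold_splits(areas=S3DIS_AREAS):
    """Yield (train_areas, test_area) pairs for S3DIS 6-fold cross-validation.

    Each of the six areas is held out exactly once; the commonly reported
    single split is the fold that tests on Area_5.
    """
    for test_area in areas:
        train_areas = [a for a in areas if a != test_area]
        yield train_areas, test_area

for train_areas, test_area in six_fold_splits():
    print(f"train on {train_areas} -> test on {test_area}")
```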
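The latency figure quoted in the Hardware Specification row (a single RTX 3090, a batch of 80k points) can be reproduced with the usual CUDA-synchronized timing loop sketched below; the stand-in model, warm-up count, and iteration count are assumptions, not settings confirmed by the paper.

```python
import time
import torch

@torch.no_grad()
def measure_latency(model, points, warmup=10, iters=100):
    """Average forward latency in milliseconds on a CUDA device.

    torch.cuda.synchronize() ensures asynchronous kernel launches have
    finished before each timestamp is taken.
    """
    model.eval()
    for _ in range(warmup):          # warm-up: cuDNN autotuning, caches
        model(points)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(points)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1000.0

# Usage sketch (the model here is a hypothetical placeholder, not LinNet):
if torch.cuda.is_available():
    device = torch.device("cuda")
    model = torch.nn.Linear(3, 256).to(device)
    points = torch.rand(80_000, 3, device=device)   # a batch of 80k points
    print(f"{measure_latency(model, points):.2f} ms / forward")
```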
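A quick way to confirm the software stack reported in the Software Dependencies row (PyTorch 1.12.1 built against CUDA 11.3) before running experiments is sketched below; other dependencies are not stated in the paper and are therefore not checked.

```python
import torch

# Versions reported for the paper's experiments: PyTorch 1.12.1, CUDA 11.3.
print("PyTorch:", torch.__version__)          # expected to start with "1.12.1"
print("CUDA (build):", torch.version.cuda)    # expected "11.3"
print("CUDA available:", torch.cuda.is_available())

assert torch.__version__.startswith("1.12.1"), "PyTorch version differs from the paper"
assert torch.version.cuda == "11.3", "CUDA build differs from the paper"
```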
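The Experiment Setup row confirms only the use of cross-entropy loss and defers the per-dataset hyperparameters to Tab. 8 and Tab. 9, which are not reproduced here. The sketch below therefore wires together a typical loss/optimizer/scheduler combination with placeholder values; the optimizer type, scheduler type, and all numbers are assumptions, not the paper's Tab. 9 settings.

```python
import torch
import torch.nn as nn

def build_training_setup(model,
                         epochs=100,          # placeholder: see Tab. 9 of the paper
                         lr=1e-3,             # placeholder
                         weight_decay=1e-2):  # placeholder
    """Assemble loss / optimizer / scheduler for a segmentation experiment.

    Only cross-entropy loss is confirmed by the paper's text; the optimizer,
    scheduler, and numeric values here are illustrative.
    """
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    return criterion, optimizer, scheduler

# Usage sketch with a stand-in per-point head (13 S3DIS classes):
model = nn.Linear(256, 13)
criterion, optimizer, scheduler = build_training_setup(model)
logits = model(torch.rand(1024, 256))                    # per-point logits
loss = criterion(logits, torch.randint(0, 13, (1024,)))  # random labels for illustration
loss.backward(); optimizer.step(); optimizer.zero_grad(); scheduler.step()
```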