Semantic segmentation of sparse irregular point clouds for leaf/wood discrimination

Authors: Yuchen BAI, Jean-Baptiste Durand, Grégoire Vincent, Florence Forbes

NeurIPS 2023

Reproducibility Variable | Result | LLM Response

Research Type | Experimental
  "We show that our model outperforms state-of-the-art alternatives on UAV point clouds. ... We propose a novel end-to-end approach SOUL (Semantic segmentation On ULS) based on PointNet++ proposed by Qi et al. [8] to perform semantic segmentation on ULS data. ... The data set (Bai et al. [14]) used in the article is already available in open access at https://zenodo.org/record/8398853 and our code is available at https://github.com/Na1an/phd_mission. ... The results are summarized in Table 1, while Figure 5 highlights the performance within the tree canopies."

Researcher Affiliation | Academia
  "Univ. Grenoble Alpes, CNRS, Inria, Grenoble INP, LJK, Grenoble, France; AMAP, Univ. Montpellier, CIRAD, CNRS, INRAE, IRD, Montpellier, France"

Pseudocode | Yes
  "Algorithm 1: Geodesic Voxelization Decomposition (GVD)"

Open Source Code | Yes
  "The data set (Bai et al. [14]) used in the article is already available in open access at https://zenodo.org/record/8398853 and our code is available at https://github.com/Na1an/phd_mission."

Open Datasets | Yes
  "The data set (Bai et al. [14]) used in the article is already available in open access at https://zenodo.org/record/8398853"

Dataset Splits | Yes
  "To facilitate model training and quantitative analysis, the labeled ULS data was partitioned into training, validation, and test sets based on tree ID, with 221 trees in the training data set, 20 in the validation data set, and 40 in the test data set."

Hardware Specification | No
  The paper states "lower GPU requirements" when discussing the choice of PointNet++, but does not report the specific hardware (e.g., GPU model, CPU type, memory) used to run its experiments.

Software Dependencies | No
  The paper mentions software components such as the Adam optimization algorithm, Nesterov SGD, the ELU and ReLU activation functions, and PyTorch, but does not specify their version numbers.

Experiment Setup | Yes
  "The Adam optimization algorithm is chosen as the optimizer and the learning rate is 1e-7. ... until reaching a final batch size of 128 ... typically exceeding 3,000 epochs. ... The fraction of the input units to drop for dropout layer was changed from 0.5 to 0.3 ..."
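The tree-ID-based partition quoted under "Dataset Splits" (221 training, 20 validation, 40 test trees, with every point of a tree assigned to exactly one split) can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: the function name, the point-record layout, and the random shuffling are assumptions; only the grouping criterion and the 221/20/40 counts come from the paper.

```python
import random

def split_by_tree_id(points, n_train=221, n_val=20, n_test=40, seed=0):
    """Partition labeled points into train/val/test sets by tree ID,
    so that all points belonging to one tree land in exactly one split
    (the criterion described in the paper; counts 221/20/40)."""
    tree_ids = sorted({p["tree_id"] for p in points})
    assert len(tree_ids) == n_train + n_val + n_test
    rng = random.Random(seed)
    rng.shuffle(tree_ids)
    train_ids = set(tree_ids[:n_train])
    val_ids = set(tree_ids[n_train:n_train + n_val])
    test_ids = set(tree_ids[n_train + n_val:])
    subset = lambda ids: [p for p in points if p["tree_id"] in ids]
    return subset(train_ids), subset(val_ids), subset(test_ids)

# Tiny synthetic example: 281 trees, one point each.
points = [{"tree_id": t, "xyz": (0.0, 0.0, 0.0)} for t in range(281)]
train, val, test = split_by_tree_id(points)
print(len(train), len(val), len(test))  # 221 20 40
```

Splitting on tree ID rather than on individual points avoids leakage: points from the same tree are strongly correlated, so scattering them across splits would inflate test scores.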
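The "Experiment Setup" quote says training proceeds "until reaching a final batch size of 128" over more than 3,000 epochs, implying a progressive batch-size ramp. The paper excerpt does not state the starting size, the growth rule, or the ramp interval, so the doubling schedule below is purely an illustrative assumption; only the cap of 128 is taken from the quote.

```python
def batch_size_schedule(epoch, initial=16, final=128, ramp_every=500):
    """Hypothetical progressive batch-size schedule: double the batch
    size every `ramp_every` epochs, capped at `final`. The start value
    (16), doubling rule, and interval (500) are assumptions for
    illustration; the paper only gives the final size of 128."""
    size = initial * (2 ** (epoch // ramp_every))
    return min(size, final)

for epoch in (0, 500, 1000, 1500, 2000):
    print(epoch, batch_size_schedule(epoch))  # 16, 32, 64, 128, 128
```

Growing the batch size over training is a known alternative to decaying the learning rate; with the very small quoted learning rate of 1e-7, a ramp like this would gradually reduce gradient noise late in training.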