All Points Matter: Entropy-Regularized Distribution Alignment for Weakly-supervised 3D Segmentation
Authors: Liyao Tang, Zhe Chen, Shanshan Zhao, Chaoyue Wang, Dacheng Tao
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the effectiveness through extensive experiments on various baselines and large-scale datasets. Results show that ERDA enables effective use of all unlabeled data points for learning and achieves state-of-the-art performance under different settings. |
| Researcher Affiliation | Academia | 1 The University of Sydney, Australia 2 La Trobe University, Australia |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | Code and model will be made publicly available at https://github.com/LiyaoTang/ERDA. Our code and pre-trained models will be released. |
| Open Datasets | Yes | We experiment with multiple large-scale datasets, including S3DIS [2], ScanNet [16], SensatUrban [33] and Pascal [19]. |
| Dataset Splits | Yes | For a fair comparison, we follow previous works [94, 95, 32] and experiment with different settings, including the 0.02% (1pt), 1% and 10% settings, where the available labels are randomly sampled according to the ratio. We also conduct the 6-fold cross-validation, as reported in Tab. 3. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or detailed computer specifications) used for running experiments were provided in the paper. |
| Software Dependencies | No | No specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, PyTorch 1.9, CUDA 11.1) needed to replicate the experiment were provided in the paper. |
| Experiment Setup | Yes | we use 2-layer MLPs for the projection network g and set m = 0.999. For training, we follow the setup of the baselines and set the loss weight α = 0.1. |