Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Text to Point Cloud Localization with Multi-Level Negative Contrastive Learning
Authors: Dunqiang Liu, Shujun Huang, Wen Li, Siqi Shen, Cheng Wang
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on the KITTI360Pose benchmark demonstrate that our method outperforms the state-of-the-art methods. Specifically, we achieve a 56.3% improvement in Top-1 retrieval recall and a 45.9% improvement in 5m localization recall. |
| Researcher Affiliation | Academia | 1Fujian Key Laboratory of Sensing and Computing for Smart Cities, Xiamen University, China 2Key Laboratory of Multimedia Trusted Perception and Efficient Computing, Ministry of Education of China, School of Informatics, Xiamen University, China |
| Pseudocode | No | The paper describes the methodology using textual explanations and mathematical formulations (e.g., equations 1-14) and a framework diagram (Figure 2), but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code https://github.com/dqliua/MNCL |
| Open Datasets | Yes | Experiments Benchmark Dataset We train and evaluate the proposed method on the KITTI360Pose benchmark (Kolmet et al. 2022). |
| Dataset Splits | Yes | Following (Kolmet et al. 2022), we sample the cells of size 30m with a stride of 10m. Five scenes are used for training (11.59 km²), one for validation, and the remaining three for testing (2.14 km²). |
| Hardware Specification | Yes | All the experiments are conducted on an NVIDIA RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions using "Adam W optimizer" but does not provide specific version numbers for any software libraries, programming languages (beyond implied Python for machine learning frameworks), or specialized solvers required to replicate the experiments. |
| Experiment Setup | Yes | For hyper-parameters, we set the number of hidden layers in GCNs to 3, and set the parameter α in the loss function to 2. We train our model 32 epochs with Adam W optimizer (Loshchilov and Hutter 2017). The learning rate is set to 1e-3 and decays by half every 5 epochs. |
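The step-decay schedule quoted in the Experiment Setup row (base learning rate 1e-3, halved every 5 epochs over 32 epochs) can be sketched as follows. This is a minimal illustration of the stated rule, not the authors' code; in PyTorch the same policy would correspond to `torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)` paired with AdamW.

```python
def learning_rate(epoch, base_lr=1e-3, decay_every=5, factor=0.5):
    """Learning rate at a given 0-indexed epoch under step decay:
    lr(e) = base_lr * factor ** (e // decay_every)."""
    return base_lr * factor ** (epoch // decay_every)

# Full schedule for the 32 training epochs described in the paper.
schedule = [learning_rate(e) for e in range(32)]
```

Under this reading, epochs 0-4 train at 1e-3, epochs 5-9 at 5e-4, and the final epochs (30-31) at 1e-3 * 0.5**6 ≈ 1.56e-5.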