Near-Linear Time Algorithm for the Chamfer Distance
Authors: Ainesh Bakshi, Piotr Indyk, Rajesh Jayaram, Sandeep Silwal, Erik Waingarten
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform an empirical evaluation of our Chamfer distance estimation algorithm. Our experiments demonstrate the effectiveness of our algorithm for both low and high dimensional datasets and across different dataset sizes. |
| Researcher Affiliation | Collaboration | Ainesh Bakshi MIT ainesh@mit.edu Piotr Indyk MIT indyk@mit.edu Rajesh Jayaram Google Research rkjayaram@google.com Sandeep Silwal MIT silwal@mit.edu Erik Waingarten University of Pennsylvania ewaingar@seas.upenn.edu |
| Pseudocode | Yes | Figure 1: The Chamfer-Estimate Algorithm. Figure 2: The Crude NN Algorithm. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing its own source code or a direct link to a code repository for the methodology described. |
| Open Datasets | Yes | Dataset (\|A\|, \|B\|; d; Experiment; Metric; Reference): ShapeNet (8×10³, 8×10³; 3; Small Scale; 1; [10]); Text Embeddings (2.5×10³, 1.8×10³; 300; Small Scale; 1; [16]); Gaussian Points (5×10⁴, 5×10⁴; 2; Outliers; 1); DEEP1B (10⁴, 10⁹; 96; Large Scale; 2; [9]); Microsoft-Turing (10⁵, 10⁹; 100; Large Scale; 2; [21]) |
| Dataset Splits | No | The paper discusses evaluating performance using 'relative error' and 'sample complexity', and mentions datasets used for testing, but it does not provide explicit details on training/validation/test splits, such as specific percentages or sample counts for each partition. |
| Hardware Specification | Yes | For both these datasets, we implement our algorithms using Python 3.9.7 on an M1 MacBook Pro with 32GB of RAM. ... on a shared virtual compute cluster with 2x64 core AMD Epyc 7763 CPUs (Zen3) with 2.45 GHz–3.5 GHz clock frequency, 2TB DDR4 RAM and 256 MB L3 cache. |
| Software Dependencies | No | The paper mentions 'Python 3.9.7', 'Numpy', 'Numba', 'Tensorflow', 'Pytorch', and 'PDAL'. However, only Python is specified with a version number (3.9.7), while the other software components lack version details. |
| Experiment Setup | Yes | In our experiments, we treat the number of levels of grids to use as a tuneable parameter in our implementation and find that a very small number suffices for high quality results in the importance sampling phase. ... For the DEEP and Turing datasets, Figures 3 (d) and 3 (e) show the sample complexity vs relative error trade-offs for the best parameter choice (both 64×10⁶) compared to uniform sampling. ... We set the number of samples for uniform sampling and importance sampling (our algorithm) such that they both output estimates with (close to) 2% relative error. |
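For context, the quantity being estimated is the (one-sided) Chamfer distance, CH(A, B) = Σ_{a∈A} min_{b∈B} ‖a − b‖, and the uniform-sampling baseline mentioned above estimates it by averaging nearest-neighbor distances over a random subset of A. The sketch below illustrates the exact computation and that baseline only; it is not the paper's Chamfer-Estimate algorithm (which uses importance sampling over a hierarchy of grids), and all function names are illustrative.

```python
import math
import random

def chamfer(A, B):
    """Exact one-sided Chamfer distance:
    sum over a in A of the distance from a to its nearest neighbor in B.
    Brute force: O(|A| * |B|) distance computations."""
    return sum(min(math.dist(a, b) for b in B) for a in A)

def chamfer_uniform_estimate(A, B, n_samples, seed=0):
    """Uniform-sampling baseline: average the nearest-neighbor distance
    over n_samples points drawn uniformly (with replacement) from A,
    then rescale by |A|. Unbiased, but its variance is large when a few
    points of A dominate the sum -- the motivation for importance sampling."""
    rng = random.Random(seed)
    sample = [rng.choice(A) for _ in range(n_samples)]
    avg = sum(min(math.dist(a, b) for b in B) for a in sample) / n_samples
    return len(A) * avg
```

On tiny inputs the two agree exactly, e.g. `chamfer([(0.0, 0.0), (1.0, 0.0)], [(0.0, 0.0)])` returns `1.0`, while the estimator trades accuracy for touching only `n_samples` points of A.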