A New Robust Partial p-Wasserstein-Based Metric for Comparing Distributions
Authors: Sharath Raghvendra, Pouyan Shirzadian, Kaiyi Zhang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our distance function achieves higher accuracy in comparison to the 1-Wasserstein, 2-Wasserstein, and TV distances for image retrieval tasks on noisy real-world data sets. |
| Researcher Affiliation | Academia | Sharath Raghvendra (North Carolina State University); Pouyan Shirzadian and Kaiyi Zhang (Virginia Tech). |
| Pseudocode | No | The paper describes algorithms in Section 5 but does not provide pseudocode or labeled algorithm blocks in the main text or appendices. |
| Open Source Code | No | The paper provides no concrete access information for its own source code, such as a repository link or an explicit code-release statement. |
| Open Datasets | Yes | For the MNIST, CIFAR-10, and COREL datasets, our distance produces a higher accuracy in comparison to the 1-Wasserstein, 2-Wasserstein, and the TV distances. |
| Dataset Splits | No | The paper describes selecting "2k images as the labeled dataset and randomly select 50 images as the query" for image retrieval. This is a query/database retrieval setup rather than the explicit train/validation/test splits typically reported for model training (a code sketch of this setup follows the table). |
| Hardware Specification | No | The paper only acknowledges Advanced Research Computing (ARC) at Virginia Tech for providing the computational resources used to run the experiments; it does not specify any concrete hardware (e.g., CPU/GPU models, memory, or node configurations). |
| Software Dependencies | No | The paper mentions using the LMR algorithm but does not list any specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks). |
| Experiment Setup | No | The paper describes the datasets and perturbation scenarios for the image-retrieval experiments and the sample sizes for the convergence-rate experiments. However, it does not provide specific hyperparameters (e.g., learning rate, batch size, epochs, optimizer settings) or system-level configurations for any models. |
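
For reference, the baseline distances named in the table above have the following standard definitions (textbook forms, not copied from the paper). For probability distributions $\mu, \nu$ over a metric space with ground distance $d$:

$$
W_p(\mu,\nu) = \Big(\inf_{\gamma \in \Pi(\mu,\nu)} \int d(x,y)^p \, \mathrm{d}\gamma(x,y)\Big)^{1/p},
\qquad
\mathrm{TV}(\mu,\nu) = \tfrac{1}{2}\,\|\mu - \nu\|_{1},
$$

where $\Pi(\mu,\nu)$ is the set of couplings with marginals $\mu$ and $\nu$.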
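
Below is a minimal sketch of the image-retrieval protocol the table describes (a labeled database with random queries, classified by nearest neighbor under a chosen distance), using the baseline 1-Wasserstein, 2-Wasserstein, and TV distances. This is an illustration, not the authors' code: it relies on the POT library (`pip install pot`) for optimal transport, all function names are hypothetical, and the paper's partial p-Wasserstein metric itself is not reproduced here.

```python
# Hypothetical sketch of the retrieval protocol; not the authors' implementation.
import numpy as np
import ot  # Python Optimal Transport (POT)


def image_to_hist(img):
    """Flatten a grayscale image into a normalized mass vector."""
    h = img.astype(np.float64).ravel()
    return h / h.sum()


def ground_cost(side, p):
    """Pairwise d(x, y)^p cost between pixel-grid coordinates of a side x side image."""
    ys, xs = np.mgrid[0:side, 0:side]
    pts = np.column_stack([ys.ravel(), xs.ravel()]).astype(np.float64)
    return ot.dist(pts, pts, metric="euclidean") ** p


def wasserstein_p(a, b, M, p):
    """p-Wasserstein distance between histograms a, b with cost matrix M = d^p."""
    return ot.emd2(a, b, M) ** (1.0 / p)


def tv_distance(a, b):
    """Total variation distance between two histograms."""
    return 0.5 * np.abs(a - b).sum()


def retrieval_accuracy(queries, q_labels, database, db_labels, dist):
    """Fraction of queries whose nearest database image shares their label."""
    hits = 0
    for q, lbl in zip(queries, q_labels):
        d = np.array([dist(q, x) for x in database])
        hits += int(db_labels[d.argmin()] == lbl)
    return hits / len(queries)
```

For MNIST-style 28x28 images, `M1 = ground_cost(28, 1)` and `M2 = ground_cost(28, 2)` give the W1 and W2 ground costs, and e.g. `dist=lambda a, b: wasserstein_p(a, b, M1, 1)` plugs into `retrieval_accuracy` over a database of 2k histograms and 50 query histograms, matching the setup quoted in the Dataset Splits row.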