Approximate Algorithms for k-Sparse Wasserstein Barycenter with Outliers
Authors: Qingyuan Yang, Hu Ding
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct the experiments for our proposed algorithms and illustrate their efficiencies in practice. |
| Researcher Affiliation | Academia | School of Computer Science and Technology, University of Science and Technology of China; yangqingyuan@mail.ustc.edu.cn, huding@ustc.edu.cn |
| Pseudocode | No | The paper describes the steps of its algorithms in prose but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement or a direct link to a source-code repository for the methodology described in this paper. |
| Open Datasets | Yes | We also select three widely-used datasets from the UCI repository [Dua and Graff, 2017]: Bank [Moro et al., 2012], Credit card [Yeh, 2016], Adult [Becker and Kohavi, 1996]. Finally, we provide the visualized results on the MNIST dataset [LeCun et al., 2010]. |
| Dataset Splits | No | The paper describes the datasets used and how they are partitioned into different distributions or augmented with outliers, but it does not specify explicit training, validation, or test dataset splits or ratios. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory, or computational clusters) used for running its experiments. |
| Software Dependencies | Yes | In our implementation, we use the k-means++ [Arthur and Vassilvitskii, 2007] as Algorithm A and k-means-- [Chawla and Gionis, 2013] as Algorithm B; we also employ the LP solver [Gurobi Optimization, LLC, 2023] as the subroutine for solving fixed-support WB with outliers. (Hedged sketches of these subroutines appear below the table.) |
| Experiment Setup | No | The paper describes some procedural aspects of the experiments (e.g., how outliers are handled and how the top k centers are retained) and the datasets, but it does not specify concrete hyperparameters such as learning rates, batch sizes, number of epochs, or optimizer settings for the algorithms used. |
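
The Software Dependencies row names two clustering subroutines. As a reading aid, here is a minimal Python sketch of how they plausibly fit together: k-means++ seeding (Algorithm A, via scikit-learn's `kmeans_plusplus` helper) followed by a k-means--style Lloyd loop that trims the z farthest points as outliers in each iteration (Algorithm B, in the spirit of Chawla and Gionis, 2013). The function name, parameters, and the use of scikit-learn are assumptions for illustration; the paper does not provide this code.

```python
# Hedged sketch: k-means++ seeding plus a k-means--style outlier-trimming
# Lloyd loop. All names and defaults here are illustrative assumptions,
# not the paper's implementation.
import numpy as np
from sklearn.cluster import kmeans_plusplus

def kmeans_minus_minus(X, k, z, n_iter=50, seed=0):
    """k-means with z outliers, seeded by k-means++ (Algorithm A)."""
    centers, _ = kmeans_plusplus(X, n_clusters=k, random_state=seed)
    for _ in range(n_iter):
        # squared distance from every point to its nearest current center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(axis=1)
        nearest_d2 = d2[np.arange(len(X)), nearest]
        # the z farthest points are treated as outliers this iteration
        inliers = np.argsort(nearest_d2)[: len(X) - z]
        new_centers = np.array([
            X[inliers][nearest[inliers] == i].mean(axis=0)
            if np.any(nearest[inliers] == i) else centers[i]
            for i in range(k)
        ])
        if np.allclose(new_centers, centers):  # converged
            break
        centers = new_centers
    return centers
```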
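The same row mentions employing the Gurobi LP solver as the subroutine for fixed-support WB with outliers. Below is a hedged `gurobipy` sketch of one standard way to pose that LP: the barycenter weights on a fixed candidate support are decision variables, and each input distribution may discard up to a z fraction of its mass as outliers. The constraint set, the squared-Euclidean ground cost, and the single shared outlier budget are assumptions based on common fixed-support formulations, not the paper's exact program.

```python
# Hedged sketch of a fixed-support Wasserstein-barycenter-with-outliers LP.
# The formulation below (shared outlier fraction z, squared-Euclidean costs)
# is an assumption; the paper's own LP may differ in its details.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def fixed_support_wb_with_outliers(support, dists, z):
    """support: (k, d) candidate barycenter support points.
    dists: list of (points, weights) pairs, one per input distribution.
    z: fraction of mass each distribution may discard as outliers."""
    k = len(support)
    m = gp.Model("wb_outliers")
    m.Params.OutputFlag = 0
    w = m.addVars(k, lb=0.0, name="w")  # barycenter weights (decision vars)
    m.addConstr(gp.quicksum(w[i] for i in range(k)) == 1.0 - z)
    obj = 0
    for j, (pts, b) in enumerate(dists):
        n = len(pts)
        # squared-Euclidean ground costs between the two supports
        cost = ((support[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        pi = m.addVars(k, n, lb=0.0, name=f"pi{j}")
        # each barycenter atom receives exactly its weight from distribution j
        m.addConstrs(gp.quicksum(pi[i, l] for l in range(n)) == w[i]
                     for i in range(k))
        # each source point ships at most its own mass; z mass is dropped
        m.addConstrs(gp.quicksum(pi[i, l] for i in range(k)) <= b[l]
                     for l in range(n))
        obj += gp.quicksum(cost[i, l] * pi[i, l]
                           for i in range(k) for l in range(n))
    m.setObjective(obj, GRB.MINIMIZE)
    m.optimize()
    return np.array([w[i].X for i in range(k)]), m.ObjVal
```

In this reading, the k candidate support points would come from the clustering step above, and the LP then only optimizes the weights and transport plans, which is what makes the fixed-support subproblem a linear program.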