Regularized Wasserstein Means for Aligning Distributional Data

Authors: Liang Mi, Wen Zhang, Yalin Wang (pp. 5166–5173)

AAAI 2020

Reproducibility Variable Result LLM Response
Research Type Experimental We demonstrate the scalability and robustness of our method with examples in domain adaptation, point set registration, and skeleton layout. We evaluate our method on the Office-31 dataset (Saenko et al. 2010). Table 1: Classification Accuracy (%) on Office-31 (W→A)
Researcher Affiliation Academia Liang Mi, Wen Zhang, Yalin Wang Arizona State University {liangmi, wzhan139, ylwang}@asu.edu
Pseudocode Yes Algorithm 1: Wasserstein Means Algorithm 2: Regularized Wasserstein Means
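The paper's Algorithm 1 (Wasserstein Means) alternates between solving an optimal-transport assignment from the data to the centroids and updating each centroid as the transport-weighted mean of its assigned mass. A minimal illustrative sketch of that alternation — not the authors' pyvot implementation, and using entropy-regularized Sinkhorn transport as a stand-in for their variational solver — might look like:

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.01, n_iter=200):
    # Entropy-regularized OT between histograms a (n,) and b (k,)
    # under cost matrix C (n, k); returns the transport plan.
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def wasserstein_means(X, k, n_iter=20, seed=0):
    # Alternate: (1) transport uniform mass on the points X to k
    # centroids with uniform capacities, (2) move each centroid to
    # the plan-weighted barycenter of the mass it received.
    rng = np.random.default_rng(seed)
    Y = X[rng.choice(len(X), size=k, replace=False)].copy()
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        P = sinkhorn(a, b, C / C.max())
        Y = (P.T @ X) / P.sum(0)[:, None]
    return Y
```

The capacity constraint `b` is what distinguishes this from plain k-means: every centroid must absorb an equal share of the total mass, so centroids cannot collapse onto a single dense cluster.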
Open Source Code Yes Code is available at https://github.com/icemiliang/pyvot
Open Datasets Yes We evaluate our method on the office-31 dataset (Saenko and others 2010).
Dataset Splits No The paper describes data selection (e.g., 'randomly select 20 samples per class from Amazon and 10 samples per class from Webcam') but does not specify explicit train/validation/test splits or cross-validation setup.
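The per-class sampling described in the response (e.g. 20 samples per class from Amazon, 10 per class from Webcam) could be reproduced with a small helper along these lines; `sample_per_class` is a hypothetical name for illustration, not taken from the paper's code:

```python
import numpy as np

def sample_per_class(labels, n_per_class, rng=None):
    # Pick n_per_class indices uniformly at random from each class.
    # labels: 1-D array of class labels, one per example.
    rng = np.random.default_rng() if rng is None else rng
    picked = []
    for c in np.unique(labels):
        members = np.flatnonzero(labels == c)
        picked.extend(rng.choice(members, size=n_per_class, replace=False))
    return np.asarray(picked)
```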
Hardware Specification Yes CPU: Intel i5-7640x 4.0 GHz.
Software Dependencies No The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with their respective versions.
Experiment Setup Yes The regularization weight of OTDA-Laplacian is 0.3, selected by a search over {1, 0.3, 0.1, 0.03, 0.01}. The weight of RWM is 1, selected by a search over {3, 1, 0.3, 0.1, 0.03, 0.01}.
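The weight selection described above amounts to a small grid search over candidate regularization weights; a generic sketch follows, where the scoring function is a placeholder for whatever validation metric the search used (not specified in the paper):

```python
def grid_search(candidates, score_fn):
    # Evaluate every candidate weight and return the best one
    # together with the full score table for inspection.
    scores = {w: score_fn(w) for w in candidates}
    best = max(scores, key=scores.get)
    return best, scores
```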