Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Tree-Sliced Wasserstein Distance with Nonlinear Projection
Authors: Thanh Tran, Hoang V. Tran, Thanh Chu, Huyen Trang Pham, Laurent El Ghaoui, Tam Le, Tan Minh Nguyen
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we validate our proposed metric through extensive numerical experiments for Euclidean and spherical datasets. Applications include gradient flows, self-supervised learning, and generative models, where our methods demonstrate significant improvements over recent SW and TSW variants. The code is publicly available at https://github.com/thanhqt2002/NonlinearTSW. |
| Researcher Affiliation | Collaboration | ¹VinUniversity, ²National University of Singapore, ³Movian AI, ⁴The Institute of Statistical Mathematics. |
| Pseudocode | Yes | Appendix D.1. Algorithm of proposed Tree-Sliced Distances. We describe the pseudo-codes for Circular TSW, Spatial TSW, and Spatial STSW in Algorithms 1, 2, and 3, respectively. |
| Open Source Code | Yes | The code is publicly available at https://github.com/thanhqt2002/NonlinearTSW. |
| Open Datasets | Yes | All models are trained for 1800 epochs on the CIFAR10 dataset (Krizhevsky et al., 2009). |
| Dataset Splits | No | The paper mentions several datasets like CIFAR-10, 25 Gaussians dataset, and a mixture of 12 von Mises-Fisher distributions. However, it does not explicitly state the training/test/validation splits for these datasets within the paper's text. For instance, for CIFAR-10, it only mentions the dataset without detailing how it was split for their experiments. |
| Hardware Specification | Yes | In this section, we conduct a runtime and memory analysis of Spatial TSW, Circular TSW, and Circular TSW (r = 0) with respect to the number of supports and the support's dimension on a single NVIDIA A100 GPU. The denoising diffusion experiments were executed in parallel on two NVIDIA A100 GPUs, with each run lasting around 50 hours. All spherical experiments were conducted on a single NVIDIA A100 GPU. |
| Software Dependencies | No | The paper mentions using "PyTorch's automatic kernel fusion (via torch.compile)", which implies PyTorch is used, and "Adam (Kingma & Ba, 2015)" as an optimizer. However, specific version numbers for these or any other software libraries are not provided. |
| Experiment Setup | Yes | All models are trained for 1800 epochs on the CIFAR-10 dataset (Krizhevsky et al., 2009). For Tree-Sliced methods, including our own, we set L = 2500 and k = 4. For vanilla SW and SW variants, we follow Nguyen et al. (2024b) and use L = 10000. The learning rate is also set according to Nguyen et al. (2024b), where lr_d = 1.25e-4 and lr_g = 1.6e-4. For Spatial TSW, we define h(y) = y + y^3, and for Circular TSW, we set r = 0.01. The standard deviation in tree sampling follows Tran et al. (2025b) and is set to 0.1. To evaluate runtime, we use a batch size of 64 and measure time on a single NVIDIA A100 GPU. |
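The hyperparameters quoted in the Experiment Setup row can be gathered into a small configuration sketch. The values are those reported in the paper; the dictionary keys and structure are illustrative, not taken from the authors' released code:

```python
# Hyperparameters quoted from the paper's experiment setup.
# Key names are illustrative placeholders, not the authors' identifiers.
EXPERIMENT_CONFIG = {
    "dataset": "CIFAR-10",
    "epochs": 1800,
    "tree_sliced": {"num_trees_L": 2500, "lines_per_tree_k": 4},
    "vanilla_sw": {"num_projections_L": 10_000},
    "lr_discriminator": 1.25e-4,
    "lr_generator": 1.6e-4,
    "spatial_tsw_h": lambda y: y + y**3,  # nonlinear projection h(y) = y + y^3
    "circular_tsw_r": 0.01,
    "tree_sampling_std": 0.1,
    "runtime_batch_size": 64,
}
```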