Submodular framework for structured-sparse optimal transport
Authors: Piyushi Manupriya, Pratik Jawanpuria, Karthik S. Gurumoorthy, SakethaNath Jagarlapudi, Bamdev Mishra
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experimental Results): We evaluate the proposed approach in various applications. Experiments related to general sparse transport plans are discussed in Section 5.1, while those related to column-wise sparse transport plans are discussed in Section 5.2. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science and Engineering, IIT Hyderabad, India; (2) Microsoft, India; (3) Walmart Global Tech, India. |
| Pseudocode | Yes | Algorithm 1: Stochastic OMP algorithm for maximizing weakly submodular problems with cardinality constraint... Algorithm 2: OMP algorithm for maximizing weakly submodular problems with matroid constraint (a generic greedy-selection sketch follows the table). |
| Open Source Code | Yes | Code can be downloaded from https://github.com/Piyushi-0/Sparse-UOT. |
| Open Datasets | Yes | CIFAR dataset. We next focus on the binary classification problem of identifying whether a given image belongs to the CIFAR-10 dataset or the CIFAR-10-rotate dataset (Chen et al., 2022). ... The evaluation is performed on the aligned Wikipedia sentences in an unsupervised setting with the sure alignments, i.e., with the alignments agreed upon by multiple annotators (Arase et al., 2023). |
| Dataset Splits | Yes | The Wiki dataset consists of 2514 training instances, 533 validation instances, and 1052 test instances. ... The randomly sampled validation split consists of 15% instances. ... The validation data split is generated following the procedure given by Luo et al. (2023). |
| Hardware Specification | Yes | The experiments in Section 5.2 are done on an NVIDIA A100-SXM4-40GB GPU, and the remaining experiments are done on an NVIDIA GeForce RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions software components such as the Adam optimizer and the LBFGS optimizer from scipy.optimize, but it gives no version numbers for these or any other dependency, nor does it state the programming-language or library versions (e.g., PyTorch, TensorFlow) used for implementation (a scipy L-BFGS usage sketch follows the table). |
| Experiment Setup | Yes | The hyperparameters chosen after validation are as follows. The proposed approach uses an RBF kernel with sigma^2 = 1 and lambda_1 = 100. The hyperparameters (alpha, rho) in GSOT are set as (10, 1). ... For our approach, the chosen kernel is RBF with sigma^2 = m/8, lambda_1 = 1, and lambda_2 = 0.1. ... The hyperparameters chosen for the proposed approach: IMQ-v2 kernel with sigma^2 = 100 and regularization parameters lambda_1 = 100, lambda_2 = 10. The coefficient of regularization chosen for SCOT is lambda = 10 (a kernel-computation sketch follows the table). |
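
The Algorithm 1 quoted above is the paper's stochastic OMP procedure. The sketch below is not that algorithm, only a minimal illustration of the generic greedy template it refines: repeatedly add the element with the largest marginal gain of a set function until a cardinality budget is reached. The names `greedy_max` and `F`, the optional candidate subsampling, and the toy objective are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def greedy_max(F, ground_set, k, rng=None):
    """Greedy maximization of a (weakly) submodular set function F
    under the cardinality constraint |S| <= k.

    If `rng` is given, each step evaluates marginal gains only on a
    random half of the remaining candidates, loosely mimicking a
    stochastic-greedy rule (the sample size is an arbitrary choice).
    """
    S, remaining = frozenset(), set(ground_set)
    for _ in range(k):
        if not remaining:
            break
        pool = sorted(remaining)
        if rng is not None:
            m = max(1, len(pool) // 2)
            pool = list(rng.choice(pool, size=m, replace=False))
        base = F(S)
        # Marginal gain of adding e: F(S U {e}) - F(S).
        gains = {e: F(S | {e}) - base for e in pool}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break  # no candidate improves the objective
        S |= {best}
        remaining.discard(best)
    return S

# Toy usage: a monotone objective, sqrt of the selected weights' sum.
w = np.array([0.5, 2.0, 1.0, 3.0, 0.1])
F = lambda S: float(np.sqrt(w[list(S)].sum())) if S else 0.0
print(greedy_max(F, range(len(w)), k=3))  # frozenset({1, 2, 3})
```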
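On the dependency note above: the LBFGS optimizer in scipy.optimize is normally invoked through `scipy.optimize.minimize` with `method="L-BFGS-B"`. The quadratic objective below is a placeholder, not the paper's kernel-regularized UOT loss.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder objective; the paper's actual loss is not reproduced here.
def objective(x):
    return float(np.sum((x - 1.0) ** 2))

res = minimize(objective, x0=np.zeros(5), method="L-BFGS-B")
print(res.x)  # converges to the all-ones vector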
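The experiment-setup row repeatedly sets a kernel bandwidth sigma^2. As a reference point for what those numbers parameterize, here is a generic RBF Gram-matrix computation plus a plain inverse multiquadric (IMQ) kernel. Whether the paper scales by 2*sigma^2 or sigma^2, and how its "IMQ-v2" variant differs from the plain IMQ below, are unverified assumptions.

```python
import numpy as np

def _sqdist(X, Y):
    """Pairwise squared Euclidean distances between rows of X and Y."""
    sq = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T)
    return np.maximum(sq, 0.0)  # clamp tiny negatives from round-off

def rbf_gram(X, Y, sigma2=1.0):
    """RBF kernel K[i, j] = exp(-||x_i - y_j||^2 / (2 * sigma2))."""
    return np.exp(-_sqdist(X, Y) / (2.0 * sigma2))

def imq_gram(X, Y, sigma2=1.0):
    """Plain inverse multiquadric (sigma2 + ||x_i - y_j||^2)^(-1/2);
    the paper's "IMQ-v2" may differ from this generic form."""
    return (sigma2 + _sqdist(X, Y)) ** -0.5

X = np.random.default_rng(0).normal(size=(4, 3))
print(rbf_gram(X, X, sigma2=1.0).round(2))
```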