Computing all Optimal Partial Transports
Authors: Abhijeet Phatak, Sharath Raghvendra, Chittaranjan Tripathy, Kaiyi Zhang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments for outlier detection as well as PU-learning on synthetic and real-world datasets. Both of these experiments are conducted on real datasets. |
| Researcher Affiliation | Collaboration | 1Walmart Global Tech, 2Virginia Tech; {abhijeet.phatak,ctripathy}@walmart.com, {sharathr,kaiyiz}@vt.edu |
| Pseudocode | No | No, the paper describes algorithms in prose rather than providing structured pseudocode blocks. For example, Section 3 describes the 'Initialization Step' and subsequent 'phases' in paragraph form. |
| Open Source Code | Yes | The code used for the experiments reported in this paper is available at https://github.com/kaiyiz/Computing-all-optimal-partial-transport. |
| Open Datasets | Yes | Denote µ as the set of clean data containing n MNIST digits LeCun (1998) from 0 to 4, and ν as a mixed set contaminated by digits from 5 to 9. |
| Dataset Splits | No | No, the paper mentions using specific datasets for experiments, but it does not provide explicit details about dataset splits (e.g., specific percentages or counts for training, validation, or test sets). It mentions 'n = 1k and n = 10k images' as sizes for experiments, but not how they are split for training, validation, or testing. |
| Hardware Specification | Yes | Our algorithm is implemented in Java and experiments were conducted using Python, and are executed on a machine with 2.1 GHz Intel Xeon E5-2683v4 processor with 64 GB of RAM. |
| Software Dependencies | No | No, the paper only mentions the programming languages used ('implemented in Java and experiments were conducted using Python') but does not provide specific version numbers for these languages or any libraries/frameworks used within them, such as PyTorch or TensorFlow versions. |
| Experiment Setup | Yes | We execute two sets of experiments, one with n = 1k and another with n = 10k images. For each set, we consider ε = 0.2, 0.25 and 0.3. Our algorithm computes the OT-profile ω and its first derivative Dω. Then, it uses the kneedle method (with a default sensitivity of 1) to catch a sudden rise in the first derivative Dω. |
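The knee-detection step described in the setup can be sketched as follows. This is a simplified, illustrative stand-in for the kneedle method (sensitivity handling and smoothing omitted), not the authors' implementation: it normalizes a convex increasing profile to the unit square and takes the point of maximum vertical gap below the diagonal as the elbow. The profile values in the usage example are made up.

```python
def find_elbow(x, y):
    """Locate the elbow of a convex increasing curve, kneedle-style.

    Normalizes both axes to [0, 1], computes the difference curve
    d_i = x_norm_i - y_norm_i, and returns the x value where that
    difference is largest, i.e. where the curve bends most sharply.
    """
    n = len(x)
    x_min, x_rng = min(x), max(x) - min(x)
    y_min, y_rng = min(y), max(y) - min(y)
    xn = [(xi - x_min) / x_rng for xi in x]
    yn = [(yi - y_min) / y_rng for yi in y]
    diff = [xn[i] - yn[i] for i in range(n)]
    elbow_idx = max(range(n), key=lambda i: diff[i])
    return x[elbow_idx]


# Made-up profile: a sudden rise after x = 0.5 marks the elbow.
mass = [0.0, 0.25, 0.5, 0.75, 1.0]
profile = [0.0, 0.05, 0.15, 0.45, 1.0]
print(find_elbow(mass, profile))  # prints 0.5
```

In practice the paper reports using the kneedle method with a default sensitivity of 1 on the first derivative Dω; the off-the-shelf `kneed` Python package exposes these knobs directly.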