Wasserstein Propagation for Semi-Supervised Learning
Authors: Justin Solomon, Raif Rustamov, Leonidas Guibas, Adrian Butscher
ICML 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We run our scheme through a number of tests demonstrating its strengths and weaknesses compared to other potential methods for propagation. We compare Wasserstein propagation with the strategy of propagating probability distribution functions (PDFs) directly, as described in 2.2. (...) We now evaluate our techniques on real-world input. |
| Researcher Affiliation | Academia | Justin Solomon JUSTIN.SOLOMON@STANFORD.EDU Raif M. Rustamov RUSTAMOV@STANFORD.EDU Leonidas Guibas GUIBAS@CS.STANFORD.EDU Department of Computer Science, Stanford University, 353 Serra Mall, Stanford, California 94305 USA Adrian Butscher ADRIAN.BUTSCHER@GMAIL.COM Max Planck Center for Visual Computing and Communication, Campus E1 4, 66123 Saarbrücken, Germany |
| Pseudocode | No | The paper describes the algorithmic steps in Section 5 "Algorithm Details" (e.g., "For each i ∈ {1, …, m}, we solve the system from (3): Δg = 0 ∀ v ∈ V ∖ V₀; g(v) = (F⁻¹)ᵢ ∀ v ∈ V₀."), but it does not present them in a structured pseudocode or algorithm block format. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it provide links to a code repository. |
| Open Datasets | No | The paper mentions using data from sources like the "National Climatic Data Center", "Wind Sat Remote Sensing Systems", and "Carbon Dioxide Information Analysis Center" in footnotes. However, it does not provide concrete access information such as specific links, DOIs, repository names, or formal citations with author names and year for publicly available datasets. |
| Dataset Splits | No | The paper describes a semi-supervised setup where a subset of vertices (V0) has fixed labels, and the method propagates to the remainder (V \ V0). It uses this remainder for evaluating against ground truth. However, it does not specify explicit training/validation/test dataset splits with percentages or sample counts in the typical machine learning sense. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, processor types, or memory used for running the experiments. It only implies computational execution. |
| Software Dependencies | No | The paper mentions using "large-scale barrier method solvers" but does not specify any software names with version numbers for reproducibility (e.g., specific libraries, frameworks, or programming languages with versions). |
| Experiment Setup | No | The paper describes algorithmic details such as discretizing the domain of F⁻¹ using evenly-spaced samples and applying implicit time stepping for the heat equation. However, it does not provide specific values for hyperparameters or training configurations (e.g., the number of samples m, the time step t, or optimization settings) needed to replicate the experiments. |
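The step quoted in the "Pseudocode" row (solving a Laplace system per quantile of the inverse CDFs, with Dirichlet data on the labeled vertices V₀) can be sketched as follows. This is a minimal dense-matrix sketch of that one step, not the paper's implementation; the function name, the assumption that distributions are represented by m evenly-spaced inverse-CDF samples, and the use of `numpy.linalg.solve` are our own choices.

```python
import numpy as np

def propagate_inverse_cdfs(L, labeled, F_inv_samples):
    """Harmonic interpolation of inverse CDFs on a graph (sketch).

    L              : (n, n) graph Laplacian (dense, for illustration)
    labeled        : index array of labeled vertices V0
    F_inv_samples  : (|V0|, m) inverse CDFs of the labeled distributions,
                     each sampled at m evenly-spaced quantiles
    Returns an (n, m) array whose row v approximates the inverse CDF of
    the distribution propagated to vertex v.
    """
    n = L.shape[0]
    m = F_inv_samples.shape[1]
    unlabeled = np.setdiff1d(np.arange(n), labeled)

    G = np.zeros((n, m))
    G[labeled] = F_inv_samples

    # One linear system per quantile i, solved jointly as a block:
    #   Δg = 0 on V \ V0,  g(v) = (F^{-1})_i on V0.
    # Eliminating the Dirichlet rows gives  A x = -B f  with the blocks below.
    A = L[np.ix_(unlabeled, unlabeled)]
    B = L[np.ix_(unlabeled, labeled)]
    G[unlabeled] = np.linalg.solve(A, -B @ F_inv_samples)
    return G

# Tiny usage example: a 3-vertex path graph 0-1-2 with endpoints labeled.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
G = propagate_inverse_cdfs(L, np.array([0, 2]),
                           np.array([[0., 0.],   # inverse CDF at vertex 0
                                     [1., 1.]])) # inverse CDF at vertex 2
```

On this path graph the midpoint vertex receives the average of the two endpoint inverse CDFs, i.e. the quantile-wise interpolation that characterizes 1-D Wasserstein barycenters. A practical implementation would use a sparse Laplacian and a sparse solver, which is one of the configuration details the paper does not pin down.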