Graph-based Semi-supervised Learning: Realizing Pointwise Smoothness Probabilistically
Authors: Yuan Fang, Kevin Chang, Hady Lauw
ICML 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 5. Experimental Evaluation: "We empirically compare PGP with various SSL algorithms, and validate the claims in this paper. ... Figure 3. Performance comparison. In each column, the best result and those not significantly different (p > .05 in t-test) are bolded." |
| Researcher Affiliation | Academia | Yuan Fang (FANG2@ILLINOIS.EDU), Kevin Chen-Chuan Chang (KCCHANG@ILLINOIS.EDU), Hady W. Lauw (HADYWLAUW@SMU.EDU.SG); University of Illinois at Urbana-Champaign, USA; Advanced Digital Sciences Center, Singapore; Singapore Management University, Singapore |
| Pseudocode | No | The paper describes the iterative algorithm for solving πy in text (Section 3.3) but does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | No | The paper does not provide any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Datasets. We use six public datasets shown in Fig. 2. Three of them, Digit1, Text and USPS, come from a benchmark (Chapelle et al., 2006). We also use three datasets from UCI repository (Frank & Asuncion, 2010) |
| Dataset Splits | Yes | For a given |L|, we sample 200 runs, where in each run |L| points are randomly chosen as labeled, and the rest are treated as unlabeled. The sampling ensures at least one labeled point for each class. 5% of the runs are reserved for model selection, and the remaining are for testing. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper mentions using 'An existing implementation (Melacci & Belkin, 2011) is used for LSVM, whereas our own implementations are used for the others,' but it does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | Model selection is performed on the reserved runs. For each algorithm, we search k ∈ {5, 10, 15, 20, 25} to construct the kNN graph. GRF and GGS have no other parameters. For LSVM, we search γA ∈ {1e-6, 1e-4, .01, 1, 100}, r ∈ {0, 1e-4, .01, 1, 100, 1e4, 1e6}. For MP, we search α ∈ {.5, 1, 5, 20, 100}, u ∈ {1e-8, 1e-6, 1e-4, .01, .1, 1, 10}, v ∈ {1e-8, 1e-6, 1e-4, .01, .1}. For PARW, we search α ∈ {1e-8, 1e-6, 1e-4, .01, 1, 100}. For PGP, we search α ∈ {.01, .02, .05, .1, .2, .5}. |
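The evaluation protocol quoted in the Dataset Splits row (200 runs per labeled-set size, at least one labeled point per class, 5% of runs reserved for model selection) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the function names (`sample_run`, `sample_runs`) and the rejection-sampling loop used to enforce class coverage are assumptions, since the paper does not state how the coverage constraint is implemented.

```python
import random

def sample_run(labels, n_labeled, rng):
    """One run: pick n_labeled points as labeled, the rest as unlabeled,
    retrying until every class has at least one labeled point
    (rejection sampling -- an assumed implementation detail)."""
    n = len(labels)
    classes = set(labels)
    while True:
        labeled = rng.sample(range(n), n_labeled)
        if {labels[i] for i in labeled} == classes:
            labeled_set = set(labeled)
            unlabeled = [i for i in range(n) if i not in labeled_set]
            return labeled, unlabeled

def sample_runs(labels, n_labeled, n_runs=200, reserved_frac=0.05, seed=0):
    """Draw n_runs runs; reserve the first 5% for model selection,
    the remainder for testing, as described in the paper."""
    rng = random.Random(seed)
    runs = [sample_run(labels, n_labeled, rng) for _ in range(n_runs)]
    n_reserved = int(round(reserved_frac * n_runs))
    return runs[:n_reserved], runs[n_reserved:]
```

Model selection would then grid-search each algorithm's parameter ranges (the Experiment Setup row) on the reserved runs, keeping the setting with the best average accuracy before evaluating on the test runs.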