Improving Semi-Supervised Target Alignment via Label-Aware Base Kernels
Authors: Qiaojun Wang, Kai Zhang, Guofei Jiang, Ivan Marsic
AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This section compares our method with a number of state-of-the-art algorithms for semi-supervised kernel design, for both classification and regression. ... Results are reported in Table 1. |
| Researcher Affiliation | Collaboration | (1) Department of Electrical and Computer Engineering, Rutgers University, Piscataway, NJ 08854 USA (qjwang@eden.rutgers.edu, marsic@ece.rutgers.edu); (2) NEC Laboratories America, Inc., 4 Independence Way, Princeton, NJ 08540 USA ({kzhang,gfj}@nec-labs.com) |
| Pseudocode | Yes | Algorithm 1 Input: labeled samples X_l = {x_i}_{i=1}^{l}, unlabeled sample set X_u = {x_i}_{i=l+1}^{n}; Gaussian kernel k(·,·), label matrix Y = [y_1, y_2, ..., y_c] ∈ R^{l×c}. (See the input-setup sketch below the table.) |
| Open Source Code | No | No statement explicitly providing open-source code for the methodology, or a link to a code repository, was found. |
| Open Datasets | Yes | This section compares our method with a number of state-of-the-art algorithms for semi-supervised kernel design, for both classification and regression. ... Digit1, USPS, COIL2, BSI, COIL, g241n, Text, usps38, usps49, usps56, usps27, odd/even ... The task is indoor location estimation using received signal strength (RSS) that a client device received from Wi-Fi access points (Yang, Pan, and Zheng 2000). |
| Dataset Splits | No | No specific training/validation/test dataset splits with exact percentages, sample counts, or citations to predefined validation splits were found. The paper mentions '50 labeled samples randomly chosen for each class' and training/test set usage, but not a distinct validation split. |
| Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments were provided. |
| Software Dependencies | No | The paper mentions 'libsvm package' but does not provide a specific version number. No other software dependencies with version numbers are listed. |
| Experiment Setup | Yes | For the kernel width, we first compute b0 as the inverse of the average squared pairwise distances, and then choose b among b0 × {1/5, 1, 5, 10} that gives the best performance. The parameters δ and ε are chosen from {10^-5, 10^-3, 10^-1, 1}. ... The regularization parameter C is chosen from {0.1, 1, 10, 100, 1000, 10000}. (A parameter-grid sketch follows the table.) |
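
The pseudocode row above only quotes the inputs of Algorithm 1 (labeled samples X_l, unlabeled samples X_u, a Gaussian kernel k(·,·), and a label matrix Y ∈ R^{l×c}). As a rough illustration of how those inputs could be assembled, and not the authors' code, here is a minimal sketch; the function names, the one-hot encoding of Y, and the toy data are our assumptions.

```python
import numpy as np

def gaussian_kernel(X, Z, b):
    """Gaussian (RBF) kernel matrix K[i, j] = exp(-b * ||x_i - z_j||^2).

    Assumption: b is an inverse-width parameter, consistent with the paper's
    choice of b0 as the inverse of the average squared pairwise distance.
    """
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(Z ** 2, axis=1)[None, :]
        - 2.0 * X @ Z.T
    )
    return np.exp(-b * np.maximum(sq_dists, 0.0))

def one_hot_labels(y, num_classes):
    """Hypothetical encoding of class labels into Y in R^{l x c} (one column per class)."""
    Y = np.zeros((len(y), num_classes))
    Y[np.arange(len(y)), y] = 1.0
    return Y

# Toy inputs mirroring Algorithm 1's interface: l labeled and n - l unlabeled samples.
rng = np.random.default_rng(0)
X_l = rng.normal(size=(10, 5))           # labeled samples
X_u = rng.normal(size=(40, 5))           # unlabeled samples
y_l = rng.integers(0, 2, size=10)        # labels of the labeled samples
Y = one_hot_labels(y_l, num_classes=2)   # Y in R^{l x c}
X = np.vstack([X_l, X_u])
K = gaussian_kernel(X, X, b=0.1)         # base Gaussian kernel over all n samples
```
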
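The experiment-setup row quotes the search grids reported in the paper: b0 is the inverse of the average squared pairwise distance, b is drawn from b0 × {1/5, 1, 5, 10}, δ and ε from {10^-5, 10^-3, 10^-1, 1}, and C from {0.1, 1, 10, 100, 1000, 10000}. The sketch below reconstructs that selection loop under our own reading; the `evaluate` callback is a hypothetical stand-in for whatever performance criterion the authors used, which the quoted text does not specify.

```python
import itertools
import numpy as np

def average_squared_pairwise_distance(X):
    """Mean of ||x_i - x_j||^2 over all pairs; its inverse gives the base width b0."""
    sq = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(X ** 2, axis=1)[None, :]
        - 2.0 * X @ X.T
    )
    return float(np.mean(np.maximum(sq, 0.0)))

def candidate_grids(X):
    """Parameter grids as quoted in the experiment-setup row."""
    b0 = 1.0 / average_squared_pairwise_distance(X)
    return {
        "b": [b0 * f for f in (1 / 5, 1, 5, 10)],   # kernel width candidates
        "delta": [1e-5, 1e-3, 1e-1, 1],             # δ candidates
        "epsilon": [1e-5, 1e-3, 1e-1, 1],           # ε candidates
        "C": [0.1, 1, 10, 100, 1000, 10000],        # SVM regularization C
    }

def grid_search(X, evaluate):
    """Return the grid point that maximizes the user-supplied evaluate(params) score."""
    grids = candidate_grids(X)
    keys = list(grids)
    best_score, best_params = -np.inf, None
    for values in itertools.product(*(grids[k] for k in keys)):
        params = dict(zip(keys, values))
        score = evaluate(params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

In practice the selected C would be passed to an SVM trained with the resulting kernel (the paper mentions the libsvm package, without a version number), but the training and evaluation loop itself is not specified in the quoted text.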