Extracting Certainty from Uncertainty: Transductive Pairwise Classification from Pairwise Similarities
Authors: Tianbao Yang, Rong Jin
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present an empirical evaluation of our proposed simple algorithm for Transductive Pairwise Classification by Matrix Completion (TPCMC for short) on one synthetic data set and three real-world data sets. |
| Researcher Affiliation | Collaboration | Tianbao Yang, Rong Jin. The University of Iowa, Iowa City, IA 52242; Michigan State University, East Lansing, MI 48824; Alibaba Group, Hangzhou 311121, China. tianbao-yang@uiowa.edu, rongjin@msu.edu |
| Pseudocode | Yes | Algorithm 1 A Simple Algorithm for Transductive Pairwise Classification by Matrix Completion |
| Open Source Code | No | The paper does not contain any statements about making code available or provide links to a code repository. |
| Open Datasets | Yes | We further evaluate the performance of our algorithm on three real-world data sets: splice [24], gisette [12] and citeseer [21]. |
| Dataset Splits | No | The paper describes randomly choosing a subset of m examples and observing 10% of the entries of their m × m pairwise label matrix, with experiments repeated ten times, but it does not specify explicit train/validation/test splits for the full datasets in the conventional supervised-learning sense. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., 'Python 3.8, PyTorch 1.9'). It mentions algorithms like Spectral Clustering and Spectral Kernel Learning, but not their specific software implementations or versions. |
| Experiment Setup | Yes | We set s = 50 in our algorithm and other algorithms as well. ... we randomly choose m = 20%n, 30%n, ..., 90%n examples, where 10% entries of the m × m label matrix are observed. ... For each given m, we repeat the experiments ten times with random selections and report the performance scores averaged over the ten trials. |
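The sampling protocol quoted in the "Dataset Splits" and "Experiment Setup" rows can be sketched as code. This is a minimal, hypothetical reconstruction of the paper's evaluation loop, not the authors' implementation: `sample_trial`, `run_protocol`, and `score_fn` are names introduced here for illustration, and the actual TPCMC scoring step is left as a user-supplied callback.

```python
import numpy as np

def sample_trial(n, frac_labeled, obs_frac=0.10, rng=None):
    """One random trial of the sampling protocol described in the paper:
    pick m = frac_labeled * n examples, then reveal obs_frac (10%) of the
    entries of their m x m pairwise label matrix."""
    rng = np.random.default_rng() if rng is None else rng
    m = int(round(frac_labeled * n))
    labeled = rng.choice(n, size=m, replace=False)    # the m chosen examples
    n_obs = int(round(obs_frac * m * m))              # 10% of the m*m entries
    flat = rng.choice(m * m, size=n_obs, replace=False)
    rows, cols = np.unravel_index(flat, (m, m))       # observed (i, j) pairs
    return labeled, rows, cols

def run_protocol(n, n_trials=10, score_fn=None, rng=None):
    """Average a score over ten random trials for each labeled fraction
    m = 20%n, 30%n, ..., 90%n, as in the paper's setup (s = 50 would be
    fixed inside the algorithm being scored)."""
    rng = np.random.default_rng(0) if rng is None else rng
    results = {}
    for frac in [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]:
        scores = []
        for _ in range(n_trials):
            labeled, rows, cols = sample_trial(n, frac, rng=rng)
            # score_fn stands in for running TPCMC (or a baseline) on the
            # observed entries and evaluating the completed label matrix
            scores.append(score_fn(labeled, rows, cols) if score_fn else 0.0)
        results[frac] = sum(scores) / n_trials
    return results
```

A concrete algorithm plugged in as `score_fn` would receive the labeled indices and the observed pairwise-label positions for each of the ten random repetitions, matching the averaged-over-trials reporting described in the row above.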