Bipartite Edge Prediction via Transductive Learning over Product Graphs
Authors: Hanxiao Liu, Yiming Yang
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark datasets for collaborative filtering, citation network analysis and prerequisite prediction of online courses show advantageous performance of the proposed approach over other state-of-the-art methods. |
| Researcher Affiliation | Academia | Hanxiao Liu HANXIAOL@CS.CMU.EDU Carnegie Mellon University, Pittsburgh, PA 15213 USA Yiming Yang YIMING@CS.CMU.EDU Carnegie Mellon University, Pittsburgh, PA 15213 USA |
| Pseudocode | No | The paper describes algorithms and optimization steps but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We used MovieLens-100K, a benchmark data set in collaborative filtering... We also used Cora (Sen et al., 2008)... Courses (Yang et al., 2015) is a new set of course descriptions and prerequisite links we collected from the web sites of Massachusetts Institute of Technology... URL http://doi.acm.org/10.1145/2684822.2685292. |
| Dataset Splits | Yes | All the above data were used in 5-fold cross validation settings: we used 60% of the data for training, 20% for parameter tuning, and 20% for testing. By rotating the 5-fold training/validating/test subsets we measure the performance of each method on average. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific software dependency details with version numbers (e.g., library names with versions). |
| Experiment Setup | Yes | Based on the features of the vertices, we construct G and H as sparse, symmetrized kNN graphs under cosine similarity. The value of k is tuned during cross validation. For collaborative filtering we use mean squared error (MSE) as the loss function, and for the other two tasks we use the pairwise ranking loss. |
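The graph construction quoted in the Experiment Setup row can be sketched as follows. This is a hypothetical reconstruction, not the authors' code (which is not released): `knn_graph` is an assumed helper name, and the symmetrization rule (keep an edge if either endpoint selects it) is one common convention the paper does not pin down.

```python
import numpy as np

def knn_graph(X, k):
    """Sparse, symmetrized kNN adjacency under cosine similarity.

    X: (n, d) feature matrix, one row per vertex.
    Hypothetical sketch of the construction described in the paper.
    """
    # Cosine similarity: L2-normalize rows, then take inner products.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xn = X / np.clip(norms, 1e-12, None)
    S = Xn @ Xn.T
    np.fill_diagonal(S, -np.inf)  # exclude self-loops from the top-k

    n = X.shape[0]
    A = np.zeros((n, n))
    # For each vertex, keep its k most cosine-similar neighbours.
    idx = np.argsort(-S, axis=1)[:, :k]
    rows = np.repeat(np.arange(n), k)
    A[rows, idx.ravel()] = 1.0

    # Symmetrize: an edge survives if either endpoint selected it
    # (assumed convention; the paper only says "symmetrized").
    return np.maximum(A, A.T)
```

In practice k would be swept over a small grid during the validation phase, as the paper tunes it by cross validation.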
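The Dataset Splits row (5 folds rotated into 60% train / 20% validation / 20% test) corresponds to assigning three folds to training and one each to validation and testing per round. A minimal sketch, assuming a uniform random partition and a hypothetical `rotating_splits` helper:

```python
import numpy as np

def rotating_splits(n, seed=0):
    """Yield 5 (train, val, test) index splits over n examples:
    3 folds train (60%), 1 fold validation (20%), 1 fold test (20%),
    rotating the fold roles each round. Hypothetical reconstruction
    of the protocol described in the paper."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), 5)
    for r in range(5):
        test = folds[r]
        val = folds[(r + 1) % 5]
        train = np.concatenate([folds[(r + j) % 5] for j in range(2, 5)])
        yield train, val, test
```

Averaging a metric over the five rotations then gives the per-method performance the report quotes.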