Convex Co-Embedding for Matrix Completion with Predictive Side Information
Authors: Yuhong Guo
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted experiments on two types of applications, transductive multi-label learning with incomplete labels and recommendation matrix completion. In this section we present the experimental settings and results. |
| Researcher Affiliation | Academia | Yuhong Guo School of Computer Science Carleton University, Ottawa, Canada yuhong.guo@carleton.ca |
| Pseudocode | Yes | Algorithm 1 Fast Proximal Gradient Descent Algorithm |
| Open Source Code | No | The paper does not provide any statement regarding the release of source code, nor does it include a link to a code repository. |
| Open Datasets | Yes | We conducted experiments for transductive incomplete multi-label learning on ten standard multi-label datasets for web page classification from yahoo.com (Ueda and Saito 2002). We have also conducted recommendation experiments on three real-world Amazon datasets: Beauty, Office and Sports. |
| Dataset Splits | Yes | For each dataset, we randomly sampled 10% instances for testing and used the remaining 90% data for training. We did parameter selection for the regularization parameter γ from the set 2^{-9, -8, ..., 8, 9} by using two-fold cross-validation on the labeled training data. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | The proposed approach, CoEmbed, has three trade-off parameters α, γ and ρ. Note as shown in Proposition 1, any difference between α and γ will simply lead to rescaling the input feature values. Hence we just set α = γ. Moreover, the ρ parameter controls the degree of the soft approximation for the equality constraints. It should be a reasonably large value. In our experiments, we set ρ = 100. We did parameter selection for the regularization parameter γ from the set 2^{-9, -8, ..., 8, 9} by using two-fold cross-validation on the labeled training data. |
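The paper's Algorithm 1 is named "Fast Proximal Gradient Descent". As a hedged illustration of that family of methods, below is a minimal FISTA-style sketch for a nuclear-norm-regularized matrix completion objective; it is not the paper's exact objective (which additionally involves the co-embedding side-information terms), and the function names `svt` and `fista_complete` and all parameter values are illustrative assumptions:

```python
import numpy as np

def svt(Z, tau):
    # Singular value thresholding: proximal operator of tau * ||.||_* (nuclear norm).
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def fista_complete(M, mask, gamma, step=1.0, iters=200):
    """FISTA sketch for: min_X 0.5 * ||mask * (X - M)||_F^2 + gamma * ||X||_*.

    M    : observed matrix (zeros at unobserved entries)
    mask : binary matrix, 1 where an entry is observed
    """
    X = np.zeros_like(M)
    Y = X.copy()
    t = 1.0
    for _ in range(iters):
        grad = mask * (Y - M)                        # gradient of the smooth part at Y
        X_new = svt(Y - step * grad, step * gamma)   # proximal (shrinkage) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = X_new + ((t - 1.0) / t_new) * (X_new - X)  # momentum extrapolation
        X, t = X_new, t_new
    return X
```

The momentum extrapolation on `Y` is what makes this a "fast" (accelerated) proximal method, with an O(1/k^2) convergence rate versus O(1/k) for plain proximal gradient descent; the binary mask keeps the smooth part's Lipschitz constant at 1, so `step=1.0` is a valid choice here.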
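The parameter-selection protocol quoted above (γ chosen from 2^{-9}, ..., 2^{9} by two-fold cross-validation on the labeled training data) can be sketched generically as follows; `two_fold_cv_gamma` and the ridge-regression fit/score pair in the usage note are hypothetical stand-ins, not the paper's model:

```python
import numpy as np

def two_fold_cv_gamma(X, y, fit, score, exponents=range(-9, 10), seed=0):
    """Pick gamma from {2^-9, ..., 2^9} by two-fold cross-validation.

    fit(X_train, y_train, gamma) -> model
    score(model, X_val, y_val)   -> higher is better
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = [idx[: len(y) // 2], idx[len(y) // 2:]]  # two random halves
    best_gamma, best_score = None, -np.inf
    for e in exponents:
        gamma = 2.0 ** e
        total = 0.0
        for k in (0, 1):  # train on one half, validate on the other, then swap
            train, val = folds[1 - k], folds[k]
            model = fit(X[train], y[train], gamma)
            total += score(model, X[val], y[val])
        if total > best_score:
            best_gamma, best_score = gamma, total
    return best_gamma
```

A toy usage with closed-form ridge regression as the plug-in learner: `fit = lambda Xt, yt, g: np.linalg.solve(Xt.T @ Xt + g * np.eye(Xt.shape[1]), Xt.T @ yt)` and `score = lambda w, Xv, yv: -np.mean((Xv @ w - yv) ** 2)`.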