From Zero-Shot Learning to Cold-Start Recommendation
Authors: Jingjing Li, Mengmeng Jing, Ke Lu, Lei Zhu, Yang Yang, Zi Huang (pp. 4189–4196)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on both ZSL and CSR tasks verify that the proposed method is a win-win formulation, i.e., not only can CSR be handled by ZSL models with a significant performance improvement compared with several conventional state-of-the-art methods, but the consideration of CSR can benefit ZSL as well. |
| Researcher Affiliation | Academia | Jingjing Li (1), Mengmeng Jing (1), Ke Lu (1), Lei Zhu (2), Yang Yang (1), Zi Huang (3); (1) University of Electronic Science and Technology of China; (2) Shandong Normal University; (3) The University of Queensland |
| Pseudocode | Yes | Algorithm 1. Low-rank Linear Auto-Encoder for CSR (a hedged sketch of this family of models appears after the table) |
| Open Source Code | No | The complete codes will be released on publication. |
| Open Datasets | Yes | For zero-shot recognition, four most popular benchmarks are evaluated. For instance, aPascal-aYahoo (aP&aY) (Farhadi et al. 2009), Animals with Attributes (AwA) (Lampert, Nickisch, and Harmeling 2014), SUN scene attribute dataset (SUN) (Patterson and Hays 2012) and Caltech-UCSD Birds-200-2011 (CUB) (Wah et al. 2011). ... For cold-start recommendation, we mainly use social data as side information. The following four datasets, which consist of image, video, blog and music recommendation, are used for evaluation. Flickr (Tang, Wang, and Liu 2012)... BlogCatalog (Tang, Wang, and Liu 2012)... YouTube (Tang, Wang, and Liu 2012)... Hetrec11-LastFM (Cantador, Brusilovsky, and Kuflik 2011)... |
| Dataset Splits | Yes | For the evaluated datasets, we split each of them into two subsets, one includes 10% of the users as new users (test dataset) for cold-start, and the remainder of 90% users are collected as training data to learn the encoder and decoder. We deploy cross-validation with grid-search to tune all hyper-parameters on training data. Specifically, we select 80% users for training and 10% for validation. The new users are randomly selected, so we build 10 training-test folds and report the average results. (A sketch of this split protocol follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments. It only mentions that the complexity depends on dimensionality, not the number of samples, making it applicable to large-scale datasets. |
| Software Dependencies | No | The paper mentions that the main part of their method 'can be implemented by only one line of Matlab code', but it does not provide a specific version number for Matlab or any other software dependencies. (The sketch after the table shows the likely Sylvester-equation solve behind that one-liner.) |
| Experiment Setup | No | The paper states that 'The hyper-parameters λ and β are tuned by cross-validation using the training data.' However, it does not provide the specific values for these or any other hyper-parameters, nor does it detail other system-level training settings needed for reproduction. (An illustrative grid-search sketch follows the table.) |
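
Algorithm 1 itself is not reproduced in this report, so the following is a minimal sketch of the tied-weight linear auto-encoder family it belongs to, assuming an SAE-style objective min_W ||X − WᵀS||² + λ||WX − S||², where X is the user-behavior matrix and S the side-information matrix; the low-rank term weighted by β is omitted, so this is an illustration of the general approach, not the authors' exact method. Setting the gradient to zero yields the Sylvester equation (SSᵀ)W + W(λXXᵀ) = (1+λ)SXᵀ, which would also explain the paper's "one line of Matlab code" remark: Matlab's built-in `sylvester(A, B, C)` solves it directly. A Python analogue via SciPy:

```python
# Minimal sketch (not the authors' code): tied-weight linear auto-encoder
# in the SAE style, solved in closed form as a Sylvester equation.
# The low-rank regularizer (beta) from the paper's Algorithm 1 is omitted.
import numpy as np
from scipy.linalg import solve_sylvester

def fit_linear_autoencoder(X, S, lam=0.1):
    """X: (d, n) user-behavior matrix; S: (k, n) side information.
    Returns W of shape (k, d) with S ~ W @ X and X ~ W.T @ S.
    Stationarity of ||X - W.T @ S||^2 + lam * ||W @ X - S||^2 gives
    (S @ S.T) @ W + W @ (lam * X @ X.T) = (1 + lam) * S @ X.T."""
    A = S @ S.T                      # (k, k)
    B = lam * (X @ X.T)              # (d, d)
    C = (1 + lam) * (S @ X.T)        # (k, d)
    return solve_sylvester(A, B, C)  # one call, like Matlab's sylvester()

def cold_start_scores(W, s_new):
    """Decode a new user's side information s_new (k,) into predicted
    behavior scores over the d items; rank items by these scores."""
    return W.T @ s_new
```

Note that the complexity of the solve depends only on the dimensionalities k and d, not on the number of users n, which matches the paper's scalability claim quoted in the Hardware Specification row.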
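The Dataset Splits row quotes the paper's protocol: 10% of users held out as cold-start test users, 10 random folds, and an 80%/10% train/validation division of the rest. A hedged sketch of that protocol, with all function and variable names our own:

```python
# Illustrative reconstruction of the quoted split protocol; the exact
# fold construction in the paper may differ.
import numpy as np

def make_folds(n_users, n_folds=10, seed=0):
    rng = np.random.default_rng(seed)
    folds = []
    for _ in range(n_folds):
        perm = rng.permutation(n_users)
        n_test = n_users // 10            # 10% new (cold-start) users
        n_val = n_users // 10             # 10% for hyper-parameter tuning
        test = perm[:n_test]
        val = perm[n_test:n_test + n_val]
        train = perm[n_test + n_val:]     # remaining ~80% for fitting
        folds.append((train, val, test))
    return folds
```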
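Finally, since the paper reports only that λ and β are tuned by cross-validation with grid search, without giving values, any reproduction has to supply its own grid. An illustrative sketch reusing `fit_linear_autoencoder` from above (the grid values are our assumption, and β is again dropped along with the low-rank term):

```python
# Illustrative grid search over lambda on the validation users;
# the grid values are assumptions, not taken from the paper.
import numpy as np

def grid_search_lambda(X_tr, S_tr, X_val, S_val, grid=(0.01, 0.1, 1.0, 10.0)):
    best_lam, best_err = None, np.inf
    for lam in grid:
        W = fit_linear_autoencoder(X_tr, S_tr, lam)
        # Cold-start proxy: predict validation behavior from side info only.
        err = np.linalg.norm(X_val - W.T @ S_val)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam
```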