Learning the Compositional Visual Coherence for Complementary Recommendations
Authors: Zhi Li, Bo Wu, Qi Liu, Likang Wu, Hongke Zhao, Tao Mei
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the large-scale real-world data clearly demonstrate the effectiveness of CANN compared with several state-of-the-art methods. |
| Researcher Affiliation | Collaboration | Zhi Li¹, Bo Wu², Qi Liu¹,³, Likang Wu³, Hongke Zhao⁴ and Tao Mei⁵. ¹Anhui Province Key Laboratory of Big Data Analysis and Application, School of Data Science, University of Science and Technology of China; ²Columbia University; ³School of Computer Science and Technology, University of Science and Technology of China; ⁴The College of Management and Economics, Tianjin University; ⁵JD AI Research |
| Pseudocode | Yes | Algorithm 1 Compositional Optimization Strategy |
| Open Source Code | Yes | The datasets and source codes are available in our project pages: https://data.bdaa.pro/BDAA_Fashion/index.html |
| Open Datasets | Yes | We evaluate our proposed method on a real-world dataset, i.e., Polyvore dataset [Han et al., 2017; Vasileva et al., 2018]. |
| Dataset Splits | Yes | Then, we split the dataset into 59,212 outfits with 221,711 fashion items for training, 3,000 outfits for validation and 10,218 outfits for testing. |
| Hardware Specification | Yes | Our model and all the compared methods are developed and trained on a Linux server with two 2.20 GHz Intel Xeon E5-2650 v4 CPUs and four TITAN Xp GPUs. |
| Software Dependencies | No | The paper mentions software components and models such as the GoogLeNet Inception V3 model and the ReLU activation function, but it does not provide version numbers for any software, libraries, or frameworks used (see the feature-extraction sketch after the table). |
| Experiment Setup | Yes | The number of visual spaces is set to S = 4, and for each visual space ds = df/S = 128 and b = 4 unless otherwise noted. Our model is trained with an initial learning rate of 0.2 that is decayed by a factor of 2 every 2 epochs. The batch size is set to 9 and the seed collection length k is set to 8 (these settings are collected in the configuration sketch after the table). |
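Since no dependency versions are reported, the following is a minimal sketch of how item images could be encoded with a pretrained Inception V3. It assumes the torchvision implementation, ImageNet preprocessing, and use of the 2048-d pooled feature; none of these details are confirmed by the paper, and this is not the authors' released code.

```python
# Hypothetical sketch: encode one fashion-item image with a pretrained
# Inception V3 (the paper only names "GoogLeNet Inception V3"; the
# torchvision weights and preprocessing below are assumptions).
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(299),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# torchvision >= 0.13 weights API (assumed dependency version).
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()  # keep the 2048-d pooled feature, drop the classifier
model.eval()

def extract_feature(image_path: str) -> torch.Tensor:
    """Return a 2048-d visual feature for a single item image."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        return model(preprocess(img).unsqueeze(0)).squeeze(0)
```

The 2048-d backbone feature would still need to be projected to the paper's df = 512 item representation; that projection belongs to the model itself and is not shown here.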
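The reported hyperparameters can also be collected into a single configuration object for reference. This is an illustrative sketch rather than the authors' code: the field and function names are invented, and feature_dim = 512 is inferred from ds = df/S = 128 with S = 4.

```python
# Hypothetical configuration mirroring the reported experiment setup.
from dataclasses import dataclass

@dataclass
class CANNConfig:
    num_spaces: int = 4            # S, number of visual spaces
    space_dim: int = 128           # d_s = d_f / S
    feature_dim: int = 512         # d_f, inferred from d_s * S
    b: int = 4                     # reported b = 4
    init_lr: float = 0.2           # initial learning rate
    lr_decay_factor: float = 0.5   # "decayed by a factor of 2"
    lr_decay_every: int = 2        # every 2 epochs
    batch_size: int = 9
    seed_collection_len: int = 8   # k, seed collection length

def learning_rate_at(cfg: CANNConfig, epoch: int) -> float:
    """Step-decay learning-rate schedule implied by the reported setup."""
    return cfg.init_lr * (cfg.lr_decay_factor ** (epoch // cfg.lr_decay_every))
```

For example, learning_rate_at(CANNConfig(), epoch=4) returns 0.05, i.e., the initial rate of 0.2 halved twice after four epochs.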