A Brand-level Ranking System with the Customized Attention-GRU Model
Authors: Yu Zhu, Junxiong Zhu, Jie Hou, Yongliang Li, Beidou Wang, Ziyu Guan, Deng Cai
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct a series of experiments to evaluate the effectiveness of our proposed ranking model and test the response to the brand-level ranking system from real users on a large-scale e-commerce platform, i.e. Taobao. |
| Researcher Affiliation | Collaboration | (1) State Key Lab of CAD&CG, College of Computer Science, Zhejiang University, China; (2) Alibaba-Zhejiang University Joint Institute of Frontier Technologies; (3) Alibaba Group, Hangzhou, China; (4) School of Information and Technology, Northwest University of China |
| Pseudocode | No | The paper describes the model and its components but does not provide pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | Our code is public: https://github.com/zyody/Attention-GRU-3M |
| Open Datasets | No | A large-scale dataset is collected from Taobao. The paper does not provide concrete access information (link, DOI, citation) for this dataset, implying it is proprietary. |
| Dataset Splits | No | The paper mentions hyperparameters are "tuned via cross-validation" but does not provide specific details on the dataset splits (e.g., percentages, sample counts) used for validation or how the data was partitioned for this purpose. The dataset description focuses on how training instances are generated. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used (e.g., GPU/CPU models, memory) for running its experiments. |
| Software Dependencies | No | The paper mentions using a "publicly available python implementation" for Session-RNN but does not provide specific version numbers for Python or any other key software libraries or dependencies. |
| Experiment Setup | Yes | The number of units is empirically set to 256 for RNN models. The other hyperparameters in all models are tuned via cross-validation or set as in the original paper. Our model is optimized by AdaGrad [Duchi et al., 2011] with the log loss calculated by p(Bqu) and the label. |
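The optimization recipe quoted above (AdaGrad minimizing the log loss between a predicted preference probability and the binary label) can be sketched in a few lines. This is a hedged illustration only: a plain logistic scorer on synthetic data stands in for the paper's 256-unit Attention-GRU model, and all variable names (`w`, `g_accum`, `lr`) are ours, not the authors'.

```python
import numpy as np

# Synthetic stand-in data (the real paper uses proprietary Taobao logs).
rng = np.random.default_rng(0)
n, d = 200, 8
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (1 / (1 + np.exp(-X @ true_w)) > 0.5).astype(float)  # binary labels

w = np.zeros(d)
g_accum = np.zeros(d)  # AdaGrad: running sum of squared gradients
lr, eps = 0.5, 1e-8

for _ in range(300):
    p = 1 / (1 + np.exp(-X @ w))   # predicted preference probability
    grad = X.T @ (p - y) / n       # gradient of the mean log loss
    g_accum += grad ** 2
    # Per-coordinate AdaGrad step [Duchi et al., 2011]
    w -= lr * grad / (np.sqrt(g_accum) + eps)

p = 1 / (1 + np.exp(-X @ w))
log_loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
```

Because AdaGrad divides by the accumulated gradient magnitude, frequently-updated coordinates get progressively smaller steps, which is why it suits sparse high-dimensional features common in e-commerce ranking.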