Improving Entity Recommendation with Search Log and Multi-Task Learning
Authors: Jizhou Huang, Wei Zhang, Yaming Sun, Haifeng Wang, Ting Liu
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach using large-scale, real-world search logs of a widely used commercial Web search engine. The experimental results show that incorporating context information significantly improves entity recommendation, and learning the model in a multi-task learning setting could bring further improvements. |
| Researcher Affiliation | Collaboration | Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China; Baidu Inc., Beijing, China. {huangjizhou01, zhangwei32, sunyaming, wanghaifeng}@baidu.com; tliu@ir.hit.edu.cn |
| Pseudocode | Yes | Algorithm 1 Training the multi-task DNN model |
| Open Source Code | No | The paper does not include an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The data sets are collected from a commercial Web search engine and are described as 'large-scale, real-world data sets'. There is no indication or link provided for public access to these datasets. |
| Dataset Splits | Yes | $T_r$ was randomly split into a training set $T_r^l$ (80%), a validation set $T_r^v$ (10%), and a test set $T_r^t$ (10%) (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper mentions using Bidirectional LSTM and Gradient Boosted Decision Tree (GBDT) but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | We use a 2-layer BiLSTM with 128 hidden units. The dimensions of word embeddings, query embeddings, document embeddings, and entity embeddings are set to 256. The mini-batch size is set to 512. The learning rate is initially set to 0.1, which is decayed by a factor of 0.9 after every 10 epochs. (A configuration sketch follows the table.) |
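
For concreteness, here is a minimal sketch of the 80/10/10 random split reported above. Only the proportions come from the paper; the `split_dataset` helper, the `records` argument, and the fixed seed are hypothetical choices for illustration.

```python
# Hedged illustration of the 80/10/10 random split of T_r described above.
# The proportions follow the paper; everything else is an assumption.
import random

def split_dataset(records, seed: int = 0):
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    train = shuffled[:n_train]                # T_r^l (80%)
    val = shuffled[n_train:n_train + n_val]   # T_r^v (10%)
    test = shuffled[n_train + n_val:]         # T_r^t (10%)
    return train, val, test
```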
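Likewise, a minimal sketch of the reported experiment setup: a 2-layer BiLSTM with 128 hidden units, 256-dimensional embeddings, and a learning rate of 0.1 decayed by a factor of 0.9 every 10 epochs. This is not the authors' code; PyTorch, the `ContextEncoder` class, the vocabulary size, the pooling choice, and the use of plain SGD (the paper does not name the optimizer) are all assumptions.

```python
# A minimal sketch (not the authors' code) of the reported setup: a 2-layer
# BiLSTM with 128 hidden units per direction, 256-dim embeddings, and a
# learning rate of 0.1 decayed by 0.9 every 10 epochs. Class and variable
# names are hypothetical.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 256, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # 2-layer bidirectional LSTM, as described in the experiment setup.
        self.bilstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        out, _ = self.bilstm(self.embed(token_ids))
        return out[:, -1, :]  # last-step representation (one plausible pooling)

model = ContextEncoder(vocab_size=100_000)  # vocabulary size is an assumption
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Decay the learning rate by a factor of 0.9 after every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.9)
```

The mini-batch size of 512 would be configured in the data loader rather than the model itself; SGD is used here only as a placeholder, since the paper does not specify the optimizer.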