Deep Learning at Alibaba
Authors: Rong Jin
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We run the online experiments to verify the effectiveness of the proposed deep learning framework. The test scenario is to estimate both CTR and CVR for the displayed items returned by our search engine, which are used to rerank the returned items. We compare the ranking results for the proposed deep learning framework to those generated by the linear model (i.e. logistic regression model). The A/B tests show that, using the proposed method, we observe a 6% improvement in both CVR and GMV compared to directly using the linear model. |
| Researcher Affiliation | Industry | Rong Jin Alibaba Group, Hang Zhou, China jinrong.jr@alibaba-inc.com |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | No statement regarding open-source code availability for the described methods or a link to a code repository is provided. |
| Open Datasets | Yes | The target domain in our experiment is images from Open Images dataset [Krasin et al., 2016]. Openimages: A public dataset for large-scale multi-label and multi-class image classification. Dataset available from https://github.com/openimages, 2016. We run the proposed algorithm against the imagenet dataset to discretize the weights of Resnet-18 [He et al., 2016]. We run the proposed algorithm for object detection, using either Darknet+SSD [Liu et al., 2016] or VGG16+SSD, over Pascal VOC 2007. |
| Dataset Splits | No | The paper mentions '150,000 train samples and 150,000 test samples' but does not specify a validation set split or details for other experiments. |
| Hardware Specification | Yes | We run our algorithm over the Aliyun ODPS platform with 2000 cores to train the model, which took 2 hours for each epoch. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers (e.g., library versions, frameworks). |
| Experiment Setup | Yes | The optimal transform is obtained by minimizing the distance between images with similar tags and at the same time maximizing the distance between images with different tags... Using vectors x_i^a, x_i^p, and x_i^n, we form a triplet and define the triplet loss as follows... The model was trained using the Adam optimizer. |
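The triplet loss quoted in the Experiment Setup row (anchor, positive, and negative vectors drawn from images with matching and mismatching tags) can be sketched as below. The paper does not give the exact distance function or margin, so the squared Euclidean distance and the `margin` value here are assumptions for illustration only.

```python
import numpy as np

def triplet_loss(x_a, x_p, x_n, margin=1.0):
    """Hinge-style triplet loss: pull the anchor x_a toward the positive
    x_p (similar tags) and push it away from the negative x_n (different
    tags). The margin and squared-Euclidean distance are assumed, not
    taken from the paper."""
    d_pos = np.sum((x_a - x_p) ** 2)  # distance anchor -> positive
    d_neg = np.sum((x_a - x_n) ** 2)  # distance anchor -> negative
    return max(0.0, d_pos - d_neg + margin)

# Toy example: anchor near the positive and far from the negative,
# so the margin is already satisfied and the loss is zero.
anchor   = np.array([1.0, 0.0])
positive = np.array([1.1, 0.0])
negative = np.array([-1.0, 0.0])
print(triplet_loss(anchor, positive, negative))  # → 0.0
```

Minimizing this loss over many triplets realizes the stated objective: small distances for same-tag pairs, large distances for different-tag pairs.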