Multi-View Active Learning for Video Recommendation
Authors: Jia-Jia Cai, Jun Tang, Qing-Guo Chen, Yao Hu, Xiaobo Wang, Sheng-Jun Huang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both classification datasets and real video recommendation tasks validate that the proposed approach can significantly reduce the annotation cost. |
| Researcher Affiliation | Collaboration | Jia-Jia Cai¹, Jun Tang², Qing-Guo Chen², Yao Hu², Xiaobo Wang² and Sheng-Jun Huang¹. ¹College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, China. ²Youku Cognitive and Intelligent Lab, Alibaba Group, Hangzhou, China. {caijia, huangsj}@nuaa.edu.cn, {donald.tj, qingguo.cqg, yaoohu, yongshu.wxb}@alibaba-inc.com |
| Pseudocode | Yes | Algorithm 1 The MVAL algorithm. (A generic multi-view query-selection sketch, not the paper's Algorithm 1, follows the table.) |
| Open Source Code | No | The paper does not provide any explicit statement about releasing its source code or a link to a code repository. |
| Open Datasets | Yes | YouTube Multiview Video Games [Madani et al., 2013]. This dataset contains about 30k instances spread over 30 categories. Each instance is described by 13 feature types from 3 high-level feature families: text, visual, and auditory features. Wikipedia Articles [Rasiwasia et al., 2010]. This dataset contains 2,669 articles spread over 10 categories. Every article contains a single image and at least 70 words. |
| Dataset Splits | No | The paper specifies training and testing splits (e.g., '70% examples for training, and the other one with 30% examples for testing') but does not explicitly mention a separate validation split or its details. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper states that 'All experiments are implemented in python with scikit-learn and PyTorch' but does not provide specific version numbers for Python or the libraries. |
| Experiment Setup | Yes | For the V2T model, we use a four-layer multi-layer perceptron (MLP). More specifically, the numbers of units in every layer are 2048-1024-512-32. And the model of recommendation is a four-layer MLP too, whose architecture is 64-48-32-1. The optimizer is SGD and the learning rate is 0.01. We split annotated data and unannotated data into 100 batches evenly and respectively. In each iteration, an annotated batch and an unannotated batch are used to train the model. (A hedged PyTorch sketch of this setup follows the table.) |
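The Pseudocode row refers to the paper's Algorithm 1 (MVAL), which is not reproduced here. As a rough, generic illustration of the multi-view active-learning pattern the paper's title names, the sketch below trains one scikit-learn classifier per feature view and queries the unlabeled instance on which the views disagree most (vote entropy). The per-view logistic-regression learners and the disagreement criterion are illustrative assumptions, not the paper's method.

```python
# Generic multi-view active-learning query step (NOT the paper's Algorithm 1):
# one logistic-regression learner per feature view; the unlabeled instance
# with the highest vote entropy across views is selected for annotation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_by_view_disagreement(X_views_lab, y_lab, X_views_unlab):
    """X_views_lab / X_views_unlab: lists of (n, d_v) arrays, one per view."""
    learners = [LogisticRegression(max_iter=1000).fit(Xv, y_lab)
                for Xv in X_views_lab]
    # Each view casts a hard vote for every unlabeled instance: shape (V, n).
    votes = np.stack([clf.predict(Xv)
                      for clf, Xv in zip(learners, X_views_unlab)])
    classes = np.unique(y_lab)
    # Fraction of views voting for each class, per instance: shape (C, n).
    frac = np.stack([(votes == c).mean(axis=0) for c in classes])
    # Vote entropy: larger means stronger cross-view disagreement.
    entropy = -(frac * np.log(frac + 1e-12)).sum(axis=0)
    return int(entropy.argmax())  # index of the instance to send for labeling
```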
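For the 70%/30% train/test split quoted in the Dataset Splits row, a minimal scikit-learn sketch follows; the placeholder data, the stratification, and the fixed seed are assumptions, since the paper does not state them.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2669, 128))    # placeholder features (Wikipedia Articles size)
y = rng.integers(0, 10, size=2669)  # placeholder labels over 10 categories

# 70% training / 30% testing, mirroring the quoted split; stratification and
# the fixed seed are reproducibility assumptions, not stated in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)
```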
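The Experiment Setup row pins down the layer widths, the optimizer, and the learning rate, which is enough for a hedged PyTorch sketch. The ReLU activations, the binary cross-entropy loss, and the pairing of the 32-d V2T output with a 32-d text embedding to form the 64-d recommendation input are assumptions; the 100-batch annotated/unannotated interleaving loop is omitted.

```python
# Hedged sketch of the quoted setup: a 2048-1024-512-32 V2T MLP, a 64-48-32-1
# recommendation MLP, and SGD with lr=0.01, as stated in the paper.
import torch
import torch.nn as nn

def mlp(*dims, final_act=None):
    """Stack Linear layers with ReLU between them (ReLU is an assumption)."""
    layers = []
    for d_in, d_out in zip(dims, dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.ReLU()]
    layers = layers[:-1]  # no activation after the last Linear
    if final_act is not None:
        layers.append(final_act)
    return nn.Sequential(*layers)

v2t = mlp(2048, 1024, 512, 32)                    # V2T model: 2048-1024-512-32
rec = mlp(64, 48, 32, 1, final_act=nn.Sigmoid())  # recommendation: 64-48-32-1

opt = torch.optim.SGD(
    list(v2t.parameters()) + list(rec.parameters()), lr=0.01)  # as quoted
loss_fn = nn.BCELoss()  # assumed loss for 0/1 relevance labels

video_feat = torch.randn(8, 2048)              # placeholder video features
text_emb = torch.randn(8, 32)                  # placeholder text embeddings
labels = torch.randint(0, 2, (8, 1)).float()   # placeholder click labels

# Assumed wiring: concatenate the 32-d V2T output with the 32-d text
# embedding to form the 64-d input of the recommendation MLP.
pred = rec(torch.cat([v2t(video_feat), text_emb], dim=1))
loss = loss_fn(pred, labels)
opt.zero_grad(); loss.backward(); opt.step()
```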