Learning Multiple Tasks with Multilinear Relationship Networks

Authors: Mingsheng Long, Zhangjie Cao, Jianmin Wang, Philip S. Yu

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show that MRN yields state-of-the-art results on three multi-task learning datasets.
Researcher Affiliation | Academia | Mingsheng Long, Zhangjie Cao, Jianmin Wang, Philip S. Yu; School of Software, Tsinghua University, Beijing 100084, China. Emails: {mingsheng,jimwang}@tsinghua.edu.cn, caozhangjie14@gmail.com, psyu@uic.edu
Pseudocode | No | The paper describes the algorithm steps in paragraph text within Section 4.2 but does not provide a formally structured pseudocode or algorithm block.
Open Source Code | No | The paper states only that "Codes and datasets will be released." No repository link is provided.
Open Datasets | Yes | Office-Caltech [12]: "This dataset is the standard benchmark for multi-task learning and transfer learning." ... Office-Home [26] ... ImageCLEF-DA
Dataset Splits | Yes | We conduct model selection for all methods using five-fold cross-validation on the training set. (A sketch of this selection protocol follows below.)
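
As a concrete illustration of the quoted selection protocol, here is a minimal five-fold cross-validation sketch in Python. The data, model, and hyperparameter grid are hypothetical stand-ins: the paper does not release its model-selection code, so this shows only the generic procedure, not MRN itself.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical training data standing in for one task's features/labels.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))
y_train = rng.integers(0, 10, size=200)

# Illustrative candidate hyperparameters to select among.
candidates = [0.01, 0.1, 1.0, 10.0]

kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = {}
for C in candidates:
    fold_acc = []
    for train_idx, val_idx in kf.split(X_train):
        # Placeholder model; the paper selects hyperparameters for MRN this way.
        clf = LogisticRegression(C=C, max_iter=1000)
        clf.fit(X_train[train_idx], y_train[train_idx])
        fold_acc.append(accuracy_score(y_train[val_idx],
                                       clf.predict(X_train[val_idx])))
    scores[C] = float(np.mean(fold_acc))

best_C = max(scores, key=scores.get)
print(f"Selected C = {best_C} (mean val acc = {scores[best_C]:.3f})")
```

The key point reflected here is that all folds are drawn from the training set only; test data plays no role in model selection.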
Hardware Specification | No | The paper mentions using AlexNet and VGGNet as base architectures, but does not specify any particular hardware (e.g., GPU models, CPU, memory) used for the experiments.
Software Dependencies | No | The paper mentions using AlexNet and VGGNet architectures and DeCAF7 features, but does not specify any software versions (e.g., Python, TensorFlow, or PyTorch versions, or specific library versions).
Experiment Setup | Yes | As the classifier layer is trained from scratch, we set its learning rate to be 10 times that of the other layers. We use mini-batch stochastic gradient descent (SGD) with 0.9 momentum and a learning-rate decay strategy, and select the learning rate between 10^-5 and 10^-2 with multiplicative step 10^(1/2).
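
This quoted setup maps naturally onto standard deep-learning tooling. Below is a minimal sketch in PyTorch (an assumption; the paper does not name its framework) showing per-layer learning rates with the from-scratch classifier at 10x the base rate, SGD with 0.9 momentum, a stand-in decay schedule, and the 10^(1/2)-spaced learning-rate grid. The backbone, class count, and schedule parameters are illustrative, not the paper's.

```python
import numpy as np
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Pretrained backbone whose final classifier is replaced and trained from scratch.
backbone = models.alexnet(weights="IMAGENET1K_V1")
backbone.classifier[6] = nn.Linear(4096, 31)  # e.g., 31 classes in Office

base_lr = 1e-3  # one point on the search grid built below
param_groups = [
    # Fine-tuned layers train at the base learning rate.
    {"params": [p for n, p in backbone.named_parameters()
                if not n.startswith("classifier.6")], "lr": base_lr},
    # The from-scratch classifier layer gets 10x the base rate.
    {"params": backbone.classifier[6].parameters(), "lr": 10 * base_lr},
]
optimizer = optim.SGD(param_groups, lr=base_lr, momentum=0.9)

# Placeholder decay schedule; the paper does not specify its exact strategy.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

# Learning-rate grid: 10^-5 .. 10^-2 with multiplicative step 10^(1/2).
lr_grid = 10.0 ** np.arange(-5.0, -1.5, 0.5)
print(lr_grid)  # [1e-05 3.16e-05 1e-04 ... 1e-02]
```

Each grid point would be evaluated with the five-fold cross-validation protocol quoted under Dataset Splits; only the 10x classifier rate and the 0.9 momentum come directly from the paper.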