Dynamic Multi-Task Learning with Convolutional Neural Network
Authors: Yuchun Fang, Zhengyan Ma, Zhaoxiang Zhang, Xu-Yao Zhang, Xiang Bai
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate that the proposed dynamic multi-task CNN model outperforms traditional approaches. |
| Researcher Affiliation | Academia | Yuchun Fang1, Zhengyan Ma1, Zhaoxiang Zhang2,3,4,5, Xu-Yao Zhang3, Xiang Bai6 1School of Computer Engineering and Science, Shanghai University 2Research Center for Brain-inspired Intelligence, CASIA 3National Laboratory of Pattern Recognition, CASIA 4CAS Center for Excellence in Brain Science and Intelligence Technology 5University of Chinese Academy of Sciences 6School of Electronic Information and Communication, Huazhong University of Science and Technology |
| Pseudocode | Yes | Algorithm 1 Parameter updating |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | we take the experiment on CelebA [Liu et al., 2015]. CelebA is one of the largest face attribute datasets, containing two hundred thousand images. ... Besides the face dataset, we also test our model on CIFAR-10. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) for training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | Yes | Each task uses the same model structure, which has five convolutional layers and two fully connected layers. The parameters of the 5 convolutional layers are (40, 5x5) (kernel amount, height x width), (60, 5x5), (80, 3x3), (100, 3x3), (140, 2x2). The first fully connected layer has 360 neurons and the second one is the output layer. Between every two consecutive convolutional layers there is a non-overlapped max-pooling layer. The activation function is ReLU. |
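The per-task architecture above can be traced with a short shape calculation. The paper's quoted setup does not state the input resolution, padding, or stride, so this sketch assumes valid (unpadded) stride-1 convolutions, 2x2 non-overlapping pooling, and a hypothetical 96x96 input chosen so that every layer yields a valid spatial size:

```python
# Sketch of the per-task CNN from the experiment setup: five conv layers
# with (channels, kernel) parameters as quoted, 2x2 non-overlapped
# max-pooling between consecutive conv layers, then two FC layers.
# Input size 96 is an ASSUMPTION (not stated in the paper's quoted text).

CONV_LAYERS = [(40, 5), (60, 5), (80, 3), (100, 3), (140, 2)]  # (channels, kernel)

def feature_map_sizes(input_size=96):
    """Return (channels, height, width) after each conv (+pool) stage,
    assuming valid stride-1 convs and 2x2 non-overlapping pooling
    between every pair of consecutive conv layers."""
    sizes = []
    s = input_size
    for i, (channels, k) in enumerate(CONV_LAYERS):
        s = s - k + 1                    # valid convolution shrinks by k-1
        if i < len(CONV_LAYERS) - 1:
            s //= 2                      # 2x2 non-overlapping max-pool
        sizes.append((channels, s, s))
    return sizes

for stage in feature_map_sizes():
    print(stage)
```

Under these assumptions the final conv stage produces a 140x2x2 feature map (560 values), which is a plausible input size for the 360-neuron fully connected layer the paper describes.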