Multi-Modal Multi-Task Learning for Automatic Dietary Assessment
Authors: Qi Liu, Yue Zhang, Zhenguang Liu, Ye Yuan, Li Cheng, Roger Zimmermann
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results on a real-world dataset show that our method outperforms two strong image captioning baselines significantly. |
| Researcher Affiliation | Academia | 1. Singapore University of Technology and Design 2. Bioinformatics Institute, A*STAR, Singapore 3. School of Computing, National University of Singapore 4. Zhejiang Gongshang University |
| Pseudocode | No | The paper describes the model architecture and processes using text and mathematical equations, but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about making its source code available, nor does it include a link to a code repository. |
| Open Datasets | No | Our dataset is obtained from a mobile application for diet management, which allows users to create their accounts with personal information and report their diets by taking photos and attaching text descriptions. There are 283 users, kept anonymous for privacy. |
| Dataset Splits | Yes | We select 70%, 10% and 20% of the meals as training, development and testing sets, respectively, according to their upload time, since our algorithm intends to evaluate future meals according to users' historical activities. (See the split sketch below the table.) |
| Hardware Specification | Yes | All experiments are conducted on a PC with an Intel 3.4 GHz CPU, 4 GB of memory and an 8 GB 1080 GPU. |
| Software Dependencies | No | The paper mentions using 'NLTK' and 'AdaGrad' but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | We use stochastic gradient descent with mini-batch sizes of 50. Dropout (...) is used to avoid overfitting, and the dropout rate is set as 0.5. We use AdaGrad as the optimizer, and the initial learning rate for AdaGrad is set as 0.5. Also, gradient clipping (...) is adopted to prevent gradient exploding and vanishing, where gradients larger than 5 are rescaled. (See the training-loop sketch below the table.) |
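
A minimal sketch of the chronological 70%/10%/20% split described in the "Dataset Splits" row. Field names such as `upload_time` are assumptions for illustration; the paper does not publish its data schema.

```python
# Chronological split: train on earlier meals, evaluate on later ones,
# mirroring the 70%/10%/20% cut by upload time described in the paper.
from typing import Dict, List, Tuple


def chronological_split(
    meals: List[Dict],
    train_frac: float = 0.7,
    dev_frac: float = 0.1,
) -> Tuple[List[Dict], List[Dict], List[Dict]]:
    """Sort meals by upload time and cut them into train/dev/test in order,
    so the test set only contains meals uploaded after the training data."""
    ordered = sorted(meals, key=lambda m: m["upload_time"])  # hypothetical field
    n = len(ordered)
    n_train = int(n * train_frac)
    n_dev = int(n * dev_frac)
    train = ordered[:n_train]
    dev = ordered[n_train:n_train + n_dev]
    test = ordered[n_train + n_dev:]
    return train, dev, test
```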
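An illustrative PyTorch sketch of the settings quoted in the "Experiment Setup" row: mini-batches of 50, dropout rate 0.5, AdaGrad with an initial learning rate of 0.5, and gradient clipping at 5. The model below is a stand-in, not the paper's multi-modal multi-task architecture, and PyTorch is only one possible way to realize these settings.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder network; the paper's actual architecture is not reproduced here.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # dropout rate 0.5, as quoted
    nn.Linear(256, 10),
)
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.5)  # initial LR 0.5
loss_fn = nn.CrossEntropyLoss()

# Dummy data just to make the loop runnable.
inputs = torch.randn(500, 128)
targets = torch.randint(0, 10, (500,))
loader = DataLoader(TensorDataset(inputs, targets), batch_size=50, shuffle=True)

for x, y in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Rescale gradients whose norm exceeds 5, matching the clipping threshold quoted above.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    optimizer.step()
```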