KitcheNette: Predicting and Ranking Food Ingredient Pairings using Siamese Neural Network
Authors: Donghyeon Park, Keonwoo Kim, Yonggyu Park, Jungwoon Shin, Jaewoo Kang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As the results demonstrate, our model not only outperforms other baseline models, but also can recommend complementary food pairings and discover novel ingredient pairings. |
| Researcher Affiliation | Academia | Donghyeon Park, Keonwoo Kim, Yonggyu Park, Jungwoon Shin and Jaewoo Kang Korea University {parkdh, akim, yongqyu, jungwoonshin, kangj}@korea.ac.kr |
| Pseudocode | No | The paper describes the model architecture and mathematical formulations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://github.com/dmis-lab/KitcheNette |
| Open Datasets | Yes | In this work, we utilized Recipe1M [Marin et al., 2018], a dataset containing approximately one million recipes and their corresponding images which were collected from multiple popular websites related to cooking. |
| Dataset Splits | No | The paper mentions 'Validation' in Table 3 for performance metrics, implying a validation set was used, but it does not report split percentages or counts. |
| Hardware Specification | No | The paper does not provide any specific hardware specifications (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Python Scikit-learn [Pedregosa et al., 2011] package' but does not provide specific version numbers for Scikit-learn or any other software dependencies. |
| Experiment Setup | Yes | We train our proposed model to minimize the loss function (Mean Squared Error), which can be expressed as L(Θ) = (1/N) Σ (y_ab − Ŷ_ab)², where L is the computed loss function to be minimized during training, Θ are the model parameters to be trained, y_ab is the true score value, Ŷ_ab is the predicted score value, and N is the total number of input pairs used for training. We use the Adam optimizer for our model. |
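The MSE objective quoted in the last row can be sketched in plain Python. This is a minimal illustration of the formula L(Θ) = (1/N) Σ (y_ab − Ŷ_ab)², not the paper's implementation; the function name and sample scores are hypothetical.

```python
def mse_loss(y_true, y_pred):
    """Mean Squared Error over N ingredient-pair scores:
    L = (1/N) * sum((y_ab - Y_ab)^2)."""
    assert len(y_true) == len(y_pred), "need one prediction per true score"
    n = len(y_true)
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n

# Illustrative true vs. predicted pairing scores for three ingredient pairs
loss = mse_loss([0.8, 0.1, 0.5], [0.7, 0.2, 0.5])
print(loss)
```

In the paper this loss is minimized over the Siamese network's parameters with the Adam optimizer; here only the loss computation itself is shown.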