Enhancing Multimodal Knowledge Graph Representation Learning through Triple Contrastive Learning
Authors: Yuxing Lu, Weichen Zhao, Nan Sun, Jinzhuo Wang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted comprehensive comparisons with several knowledge graph embedding methods to validate the effectiveness of our KG-MRI model. |
| Researcher Affiliation | Collaboration | (1) Department of Big Data and Biomedical AI, College of Future Technology, Peking University; (2) Tencent AI Lab; (3) School of Computer Science and Technology, Soochow University; (4) School of Computer Science and Technology, Huazhong University of Science & Technology |
| Pseudocode | No | The paper includes mathematical equations and a framework diagram (Figure 1), but no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We constructed a biomedical knowledge graph named HMKG from the Human Metabolome Database (HMDB, https://hmdb.ca/, [Wishart et al., 2022]) |
| Dataset Splits | Yes | In our experiment, we meticulously partitioned the dataset into training, validation, and testing sets following an 8:1:1 ratio. |
| Hardware Specification | Yes | The training was conducted over 1000 epochs on a single Tesla A100 GPU. |
| Software Dependencies | No | The paper mentions specific models and optimizers such as 'ChemBERTa-2', 'SciBERT', 'AdamW', and 'CosineAnnealingLR', but does not provide version numbers for these or other software dependencies. |
| Experiment Setup | Yes | The hyperparameters were chosen to strike a balance between the model's efficiency and reliability. Both the entity and relation embeddings were initialized with a dimensionality of 128. The learning rate was established at 1.0 x 10^-3. |
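The paper reports an 8:1:1 train/validation/test partition but, as noted above, releases no code, so the exact splitting procedure is unknown. A minimal sketch of one plausible implementation is below; the function name, the fixed seed, and the use of a uniform shuffle are all assumptions, not details from the paper.

```python
import random

def split_8_1_1(items, seed=42):
    """Shuffle a list and partition it into train/val/test at an 8:1:1 ratio.

    The seed and shuffle strategy are illustrative assumptions; the paper
    does not specify how its split was produced.
    """
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

# Stand-in for knowledge-graph triples.
triples = list(range(1000))
train, val, test = split_8_1_1(triples)
print(len(train), len(val), len(test))  # 800 100 100
```

Fixing the seed makes the partition reproducible across runs, which is exactly the kind of detail a released script would pin down.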
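The table above records a base learning rate of 1.0 x 10^-3, 1000 training epochs, and a 'CosineAnnealingLR' schedule, without versions or further settings. For reference, the standard cosine-annealing schedule (as in PyTorch's `CosineAnnealingLR`) has the closed form sketched below; the minimum learning rate of 0 is an assumption, since the paper does not state it.

```python
import math

def cosine_annealing_lr(t, lr_max=1.0e-3, lr_min=0.0, t_max=1000):
    """Cosine-annealed learning rate at epoch t.

    lr_max (1e-3) and t_max (1000 epochs) come from the paper's reported
    setup; lr_min=0 is an assumed default.
    """
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t / t_max))

print(cosine_annealing_lr(0))     # 0.001 at the first epoch
print(cosine_annealing_lr(1000))  # 0.0 at the final epoch
```

This shows why the epoch count matters for reproducibility: with cosine annealing, changing `t_max` changes the effective learning rate at every epoch, not just the endpoint.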