CPa-WAC: Constellation Partitioning-based Scalable Weighted Aggregation Composition for Knowledge Graph Embedding
Authors: Sudipta Modak, Aakarsh Malhotra, Sarthak Malik, Anil Surisetty, Esam Abdel-Raheem
IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The results from our experiments on standard databases, such as Wordnet and Freebase, show that by achieving meaningful partitioning, any knowledge graph can be broken down into subgraphs and processed separately to learn embeddings. |
| Researcher Affiliation | Collaboration | Sudipta Modak (1,2), Aakarsh Malhotra (2), Sarthak Malik (2), Anil Surisetty (2), Esam Abdel-Raheem (1); (1) Department of Electrical and Computer Engineering, University of Windsor, ON, Canada; (2) AI Garage, Mastercard, Gurugram, Haryana, India |
| Pseudocode | Yes | Algorithm 1 Louvain Constellation Partitioning |
| Open Source Code | Yes | The code is made available for the research community (https://github.com/ganzagun/CPa-WAC) and summarized in Algorithm 1. |
| Open Datasets | Yes | As summarized in Table 1, we use the most widely used Wordnet [Miller, 1995] and Freebase [Bollacker et al., 2008] for our experiments. |
| Dataset Splits | Yes | Table 1 lists, per dataset, the number of entities (E), relations (R), and the sizes of the training, validation, and test splits. |
| Hardware Specification | Yes | All experiments have been conducted on an Intel i7-13700, 2.1 GHz system with 32 GB RAM and an NVIDIA RTX A2000 12 GB GPU. |
| Software Dependencies | No | The paper mentions using the AdamW optimizer and libraries such as PyTorch-BigGraph and DGL-KE, but it does not specify version numbers for any of the software dependencies used in the implementation. |
| Experiment Setup | Yes | The AdamW optimizer is used to train the weights of the proposed architecture for a total of 400 epochs in all partition-based experiments. All state-of-the-art models are implemented with the same hyperparameter settings as the proposed architecture, including batch number, regularization rate, weight decay, learning rate, and embedding dimension (a minimal sketch of this setup follows the table). |
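
The Research Type and Pseudocode rows above describe the core idea: a knowledge graph is partitioned with a Louvain-style procedure (Algorithm 1) into subgraphs whose embeddings can be learned separately. The sketch below is only a rough illustration of that idea, not the authors' Algorithm 1 or released code; it assumes triples are given as `(head, relation, tail)` tuples and uses NetworkX's built-in Louvain community detection on the undirected entity graph.

```python
# Hypothetical sketch: Louvain-style partitioning of a knowledge graph into
# subgraphs, loosely following the idea behind CPa-WAC's Algorithm 1 (not the
# authors' code). Requires networkx >= 2.8 for louvain_communities.
import networkx as nx
from networkx.algorithms.community import louvain_communities

def partition_triples(triples, resolution=1.0, seed=0):
    """Split (head, relation, tail) triples into community-wise subgraphs."""
    g = nx.Graph()
    for h, r, t in triples:
        g.add_edge(h, t)  # relation type ignored for the structural partition

    communities = louvain_communities(g, resolution=resolution, seed=seed)

    # Assign each triple to the community containing its head entity;
    # cross-community triples would need special handling (e.g. anchor nodes).
    node_to_part = {n: i for i, c in enumerate(communities) for n in c}
    partitions = [[] for _ in communities]
    for h, r, t in triples:
        partitions[node_to_part[h]].append((h, r, t))
    return partitions

if __name__ == "__main__":
    toy = [("cat", "hypernym", "feline"), ("feline", "hypernym", "mammal"),
           ("paris", "capital_of", "france"), ("france", "part_of", "europe")]
    for i, part in enumerate(partition_triples(toy)):
        print(f"partition {i}: {part}")
```

Each resulting partition can then be embedded independently, which is what makes the approach scalable; how boundary (cross-community) triples are handled is a design choice the paper addresses but this sketch does not.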
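
The Experiment Setup row reports AdamW training for 400 epochs with a hyperparameter set shared across all compared models. A minimal sketch of such a loop, under assumptions, is given below: the placeholder TransE-style scorer, the margin loss, and the numeric learning-rate and weight-decay values are not taken from the paper; only the AdamW choice and the 400-epoch count come from the quoted setup.

```python
# Minimal sketch of the reported training regime: AdamW for 400 epochs.
# The embedding model, loss, and learning-rate / weight-decay values are
# assumptions, not values from the paper.
import torch
from torch import nn

class TransEScorer(nn.Module):
    """Placeholder KG embedding model (TransE-style), standing in for CPa-WAC."""
    def __init__(self, n_entities, n_relations, dim=200):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def forward(self, h, r, t):
        # Lower score = more plausible triple.
        return (self.ent(h) + self.rel(r) - self.ent(t)).norm(p=1, dim=-1)

def train(model, loader, epochs=400, lr=1e-3, weight_decay=1e-5):
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
    margin_loss = nn.MarginRankingLoss(margin=1.0)
    for epoch in range(epochs):
        for h, r, t, h_neg, t_neg in loader:  # positive and corrupted triples
            pos = model(h, r, t)
            neg = model(h_neg, r, t_neg)
            # target = -1 drives the positive score below the negative score.
            loss = margin_loss(pos, neg, -torch.ones_like(pos))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Keeping the same batch configuration, regularization, weight decay, learning rate, and embedding dimension across all baselines (as the quoted setup states) is what makes the reported comparisons controlled; only the model definition would change between runs.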