Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs
Authors: Pengda Qin, Xin Wang, Wenhu Chen, Chunyun Zhang, Weiran Xu, William Yang Wang (pp. 8673-8680)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, our method is model-agnostic that could be potentially applied to any version of KG embeddings, and consistently yields performance improvements on NELL and Wiki dataset. ... We present two newly constructed datasets for zero-shot knowledge graph completion and show that our method achieves better performance than various embedding-based methods. |
| Researcher Affiliation | Academia | (1) Beijing University of Posts and Telecommunications, China; (2) University of California, Santa Barbara, USA; (3) Shandong University of Finance and Economics, China |
| Pseudocode | Yes | Algorithm 1 The proposed generative adversarial model for zero-shot knowledge graph relational learning. |
| Open Source Code | No | The paper mentions that baselines are implemented based on 'Open-Source Knowledge Embedding toolkit Open KE7(Han et al. 2018)', but it does not state that the authors' own code for their proposed method is released or provide a link to it. |
| Open Datasets | Yes | Because there is not available zero-shot relational learning dataset for knowledge graph, we decide to construct two reasonable datasets from the existing KG Datasets. We select NELL (Carlson et al. 2010) and Wikidata for two reasons: the large scale and the existence of official relation descriptions. |
| Dataset Splits | Yes | Table 1 (statistics of the constructed zero-shot datasets for KG link prediction) reports Train/Dev/Test relation counts of 139/10/32 for NELL-ZS and 469/20/48 for Wiki-ZS. "# Train/Dev/Test denotes the number of relations for training/validation/testing." ... "We leave out a subset of Ds as the validation set Dvalid by removing all training instances of the validation relations." |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Open KE' and optimizers like 'Adam' and models like 'Word2Vec' and 'BERT', but it does not provide specific version numbers for general software dependencies like Python, PyTorch, TensorFlow, or CUDA. |
| Experiment Setup | Yes | For NELL-ZS dataset, we set the embedding size as 100. For Wiki-ZS, we set the embedding size as 50...the margin γ is set as 10.0. For feature encoder, the upper limit of the neighbor number is 50, the number of reference triples k in one training step is 30, and the learning rate is 5e-4. For the generative model, the learning rate is 1e-4, and β1, β2 are set as 0.5, 0.9 respectively. When updating the generator one time, the iteration number nd of the discriminator is 5. The dimension of the random vector z is 15, and the number of the generated relation embedding Ntest is 20. Spectral normalization is applied for both generator and discriminator. |
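The split protocol quoted in the Dataset Splits row (hold out entire relations so that none of their triples appear in training) can be sketched as a few lines of Python. This is an illustrative reconstruction, not the authors' released tooling; the function name and signature are assumptions.

```python
# Illustrative relation-level zero-shot split (hypothetical helper, not the
# paper's code): dev/test relations are disjoint from training, so every
# triple of a held-out relation is removed from the training set.
def split_by_relation(triples, dev_rels, test_rels):
    """triples: iterable of (head, relation, tail) tuples."""
    train, dev, test = [], [], []
    for h, r, t in triples:
        if r in test_rels:
            test.append((h, r, t))    # unseen relation at test time
        elif r in dev_rels:
            dev.append((h, r, t))     # unseen relation for validation
        else:
            train.append((h, r, t))   # seen relation
    return train, dev, test
```

A quick usage check: with three triples over relations r1, r2, r3 and r2/r3 held out for dev/test, only the r1 triple remains in training.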
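The hyperparameters quoted in the Experiment Setup row can be collected into a single configuration sketch, which makes the reported values easier to reuse in a reimplementation. The dictionary layout and key names below are assumptions (the authors' code is not released); the values are those stated in the paper.

```python
# Hypothetical config sketch; key names are illustrative, values are the
# hyperparameters reported in the paper's experiment setup.
CONFIG = {
    "embedding_size": {"NELL-ZS": 100, "Wiki-ZS": 50},
    "margin_gamma": 10.0,
    "feature_encoder": {
        "max_neighbors": 50,          # upper limit on the neighbor number
        "reference_triples_k": 30,    # reference triples per training step
        "learning_rate": 5e-4,
    },
    "generative_model": {
        "learning_rate": 1e-4,
        "adam_betas": (0.5, 0.9),     # beta1, beta2
        "discriminator_iters_nd": 5,  # discriminator steps per generator step
        "noise_dim_z": 15,
        "generated_embeddings_Ntest": 20,
        "spectral_norm": True,        # applied to generator and discriminator
    },
}
```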