Few-Shot Knowledge Graph Completion
Authors: Chuxu Zhang, Huaxiu Yao, Chao Huang, Meng Jiang, Zhenhui Li, Nitesh V. Chawla
AAAI 2020, pp. 3041–3048 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two public datasets demonstrate that FSRL outperforms the state-of-the-art. |
| Researcher Affiliation | Collaboration | 1University of Notre Dame, 2Pennsylvania State University, 3JD Finance American Corporation |
| Pseudocode | Yes | Algorithm 1: FSRL Meta-Training (a simplified episode sketch appears after the table). |
| Open Source Code | No | The paper mentions implementing the method in PyTorch, with a footnote linking to the PyTorch website (https://pytorch.org/), a third-party tool; it does not provide access to the authors' own source code for the described methodology. |
| Open Datasets | Yes | We use two public datasets for experiments. The first one is based on NELL (Mitchell et al. 2018), a system that continuously collects structured knowledge from webs. The second one is based on Wikidata (Vrandečić and Krötzsch 2014). |
| Dataset Splits | Yes | In addition, we use 51/5/11 task relations for training/validation/testing in NELL and the division is set to 133/16/34 in Wiki. |
| Hardware Specification | No | The paper states, 'We employ PyTorch to implement our model and further conduct it on a server with GPU machines,' which is too vague and does not provide specific hardware details such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions implementing the model using 'Pytorch' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The embedding dimension is set to 100 and 50 for the NELL and Wiki datasets, respectively. The maximum number of local neighbors in the heterogeneous neighbor encoder is set to 30 for both datasets. In addition, we use LSTM as the reference set aggregator and matching processor. The dimension of the LSTM's hidden state is set to 200 and 100 for the NELL and Wiki datasets, respectively. The number of recurrent steps equals 2 in the matching network. We use the Adam optimizer (Kingma and Ba 2015) to update model parameters. The initial learning rate equals 0.001 and the weight decay is 0.25 for each 10k training steps. The margin distance and trade-off factor in the objective function are set to 5.0 and 0.0001, respectively (see the configuration sketch after the table). |
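
To flesh out the pseudocode row above, here is a minimal, hypothetical sketch of one FSRL-style meta-training episode. Since the authors' code is not released, the pair encoder and reference-set aggregator are simplified stand-ins (a linear layer and mean pooling instead of the paper's heterogeneous neighbor encoder and LSTM aggregator), and the data are random tensors; only the overall episode structure and the margin-based ranking objective follow the paper's description.

```python
# Hypothetical sketch of one FSRL-style meta-training episode.
# Not the authors' code; only the episode structure and the
# margin-based ranking loss follow the paper.
import torch
import torch.nn as nn

emb_dim = 100          # NELL embedding dimension from the paper
few_shot_k = 3         # reference-set size (illustrative)
num_queries = 8        # positive/negative query pairs (illustrative)

# Stand-in entity-pair encoder: the paper uses a heterogeneous
# neighbor encoder; a linear layer keeps the sketch self-contained.
pair_encoder = nn.Linear(2 * emb_dim, emb_dim)

# Reference-set aggregator: the paper aggregates the K reference
# pairs with an LSTM; mean pooling stands in here for brevity.
def aggregate(reference):                 # (K, emb_dim) -> (emb_dim,)
    return reference.mean(dim=0)

def score(queries, reference_repr):       # cosine matching score
    return nn.functional.cosine_similarity(
        queries, reference_repr.unsqueeze(0))

# One episode: a task relation's K-shot reference set plus
# positive/negative query pairs (random tensors stand in for data).
ref_pairs = torch.randn(few_shot_k, 2 * emb_dim)
pos_pairs = torch.randn(num_queries, 2 * emb_dim)
neg_pairs = torch.randn(num_queries, 2 * emb_dim)

ref_repr = aggregate(pair_encoder(ref_pairs))
pos_score = score(pair_encoder(pos_pairs), ref_repr)
neg_score = score(pair_encoder(neg_pairs), ref_repr)

# Margin-based ranking loss (margin 5.0 per the reported setup):
# positive pairs should outscore corrupted negatives by the margin.
loss = nn.MarginRankingLoss(margin=5.0)(
    pos_score, neg_score, torch.ones_like(pos_score))
loss.backward()
print(f"episode loss: {loss.item():.4f}")
```

In the full method this episode would be repeated over task relations sampled from the 51 NELL (or 133 Wiki) training relations noted in the Dataset Splits row.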
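The configuration sketch referenced in the Experiment Setup row is below: a hedged wiring of the reported hyperparameters into a PyTorch training skeleton. The `model` is a placeholder, and "the weight decay is 0.25 for each 10k training steps" is interpreted here as a step-wise learning-rate decay (`StepLR`), an assumption the paper does not confirm.

```python
# Hedged sketch of the reported training hyperparameters in PyTorch.
# `model` is a placeholder for FSRL; the 0.25-per-10k "weight decay"
# is read as a step-wise learning-rate decay (an assumption).
import torch
import torch.nn as nn

config = {
    "emb_dim":       {"NELL": 100, "Wiki": 50},   # embedding dimension
    "max_neighbors": 30,                          # heterogeneous neighbor encoder
    "lstm_hidden":   {"NELL": 200, "Wiki": 100},  # aggregator / matching LSTMs
    "recurrent_steps": 2,                         # matching network
    "margin": 5.0,                                # objective margin distance
    "trade_off": 1e-4,                            # objective trade-off factor
}

model = nn.Linear(config["emb_dim"]["NELL"], 1)   # placeholder module

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=10_000, gamma=0.25)      # decay 0.25 every 10k steps

for step in range(3):                             # training-loop skeleton
    optimizer.zero_grad()
    loss = model(torch.randn(4, config["emb_dim"]["NELL"])).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

Swapping the `"NELL"` entries for `"Wiki"` reproduces the second column of settings; everything else in the row maps one-to-one onto the `config` keys above.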