Positive-Unlabeled Learning with Adversarial Data Augmentation for Knowledge Graph Completion
Authors: Zhenwei Tang, Shichao Pei, Zhao Zhang, Yongchun Zhu, Fuzhen Zhuang, Robert Hoehndorf, Xiangliang Zhang
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on real-world benchmark datasets demonstrate the effectiveness and compatibility of our proposed method. |
| Researcher Affiliation | Academia | (1) King Abdullah University of Science and Technology; (2) Institute of Computing Technology, Chinese Academy of Sciences; (3) Institute of Artificial Intelligence, Beihang University; (4) SKLSDE, School of Computer Science, Beihang University; (5) University of Notre Dame |
| Pseudocode | No | The optimization process of PUDA is outlined in Algorithm 1 in the supplementary material. |
| Open Source Code | No | The paper refers to 'the source code of PUDA' in its implementation details but provides no link or explicit statement that the code is publicly available. |
| Open Datasets | Yes | We evaluate PUDA mainly on two benchmark datasets, namely FB15k-237 [Toutanova et al., 2015] and WN18RR [Dettmers et al., 2018]. In addition, we use OpenBioLink [Breit et al., 2020] to evaluate PUDA with given true negative triples. |
| Dataset Splits | No | The paper states that 'in the training phase, hyperparameters are tuned by grid search' but gives no details about the validation split, such as its size or proportion. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or frameworks used in the implementation. |
| Experiment Setup | No | The paper states that 'in the training phase, hyperparameters are tuned by grid search' but does not report the resulting hyperparameter values or other training settings such as learning rates, batch sizes, or the optimizer. |