Interstellar: Searching Recurrent Architecture for Knowledge Graph Embedding
Authors: Yongqi Zhang, Quanming Yao, Lei Chen
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on real datasets demonstrate the effectiveness of the searched models and the efficiency of the proposed hybrid-search algorithm. |
| Researcher Affiliation | Collaboration | Yongqi Zhang (1,3), Quanming Yao (1,2), Lei Chen (3); (1) 4Paradigm Inc.; (2) Department of Electronic Engineering, Tsinghua University; (3) Department of Computer Science and Engineering, HKUST |
| Pseudocode | Yes | Algorithm 1: the proposed recurrent architecture search, named the Interstellar algorithm. |
| Open Source Code | Yes | Code is available at https://github.com/AutoML-4Paradigm/Interstellar, and correspondence is to Q. Yao. |
| Open Datasets | Yes | Here, we illustrate the designed search space A in Section 3.1 using the Countries [8] dataset... We use four cross-lingual and cross-database subsets from DBpedia and Wikidata generated by [18]... We use three well-known benchmark datasets: WN18-RR [13] and FB15k-237 [48], which are more realistic than their supersets WN18 and FB15k [7], and YAGO3-10 [31], a much larger dataset. |
| Dataset Splits | No | The paper mentions using a 'validation set' but does not explicitly provide the training/validation/test splits (e.g., percentages or sample counts) in the main text. Some details are deferred to external papers (e.g., '[18]') or the appendix without concrete numbers. |
| Hardware Specification | Yes | Experiments are written in Python with the PyTorch framework [35] and run on a single 2080Ti GPU. |
| Software Dependencies | No | The paper mentions 'Python with the PyTorch framework [35]' but does not provide specific version numbers for Python, PyTorch, or other software dependencies. |
| Experiment Setup | No | The paper states 'Training details of each task are given in Appendix A.5.' and lists the hyper-parameters (learning rate, decay rate, dropout rate, L2 penalty, batch size; details in Appendix A.5), but the specific values for these settings are not provided in the main text. |
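To make the Pseudocode and Open Source Code rows concrete, below is a minimal PyTorch sketch of the kind of recurrent scoring cell that Interstellar searches over. This is an assumption-laden illustration, not the paper's searched architecture: the class name, embedding dimension, and the single fixed linear+tanh operator are all hypothetical, whereas Interstellar's Algorithm 1 searches over the cell's internal connections and operators.

```python
import torch
import torch.nn as nn

class RecurrentKGEScorer(nn.Module):
    """Illustrative recurrent cell for knowledge graph embedding.

    Hypothetical stand-in for the searched cell in Interstellar:
    here the combination (concat -> linear -> tanh) is hard-coded,
    while the paper's hybrid-search algorithm would choose such
    operators and connections from a search space.
    """

    def __init__(self, num_entities: int, num_relations: int, dim: int = 64):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        self.cell = nn.Linear(2 * dim, dim)  # one fixed candidate operator

    def forward(self, head: torch.Tensor, rel_path: torch.Tensor) -> torch.Tensor:
        """Score all entities as tails of (head, r1, ..., rk, ?).

        head:     (batch,) entity ids
        rel_path: (batch, k) relation ids along the path
        returns:  (batch, num_entities) scores
        """
        h = self.ent(head)                    # hidden state starts at the head embedding
        for step in range(rel_path.size(1)):  # recurrently fold in each relation
            r = self.rel(rel_path[:, step])
            h = torch.tanh(self.cell(torch.cat([h, r], dim=-1)))
        return h @ self.ent.weight.t()        # dot product against all entity embeddings

# Toy usage on random ids (shapes only; no training loop shown).
model = RecurrentKGEScorer(num_entities=100, num_relations=20, dim=32)
scores = model(torch.tensor([0, 1]), torch.tensor([[3, 5], [2, 7]]))
print(scores.shape)  # torch.Size([2, 100])
```

The recurrence over the relation path is the only property taken from the paper's framing (recurrent architectures for KG embedding); for the actual cell definitions, search space, and training hyper-parameters, consult the released code at the repository linked above.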