Open Rule Induction
Authors: Wanyun Cui, Xingran Chen
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted extensive experiments to verify the quality and quantity of the inducted open rules. |
| Researcher Affiliation | Academia | Wanyun Cui, Xingran Chen, Shanghai University of Finance and Economics, cui.wanyun@sufe.edu.cn, xingran.chen.sufe@gmail.com |
| Pseudocode | Yes | Algorithm 1: Supported beam search |
| Open Source Code | Yes | Code and datasets are available at https://github.com/chenxran/Orion |
| Open Datasets | Yes | Code and datasets are available at https://github.com/chenxran/Orion |
| Dataset Splits | No | The paper evaluates on several relation extraction datasets (Google-RE, T-REx, NYT10, WIKI80, FewRel, SemEval) and on its own OpenRule155 dataset. However, it does not explicitly provide training/validation/test split percentages or sample counts for any of these datasets in the main text, which would be needed to reproduce the data partitioning. |
| Hardware Specification | Yes | All the experiments run over a cloud of servers. Each server has 4 Nvidia Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions using specific software components such as BART, the spaCy NER library, and ExpBERT, but it does not provide version numbers for any of these software dependencies. |
| Experiment Setup | Yes | For each converted premise atom, we use Orion to induct k = 5, 10, 20 corresponding open rules. We follow the settings of ExpBERT and use k = 29, 41 hypothesis atoms inducted by Orion for Disease and Spouse, respectively. In addition, we modified ExpBERT to allow the training process to fine-tune the parameters that were frozen in the original ExpBERT, as we found that this improves the model's effectiveness. |
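
The experiment setup above relies on Orion's BART-based generator to induce k hypothesis atoms per premise atom. The sketch below is only an illustration of k-best generation with a Hugging Face BART model, not the authors' code: the checkpoint name, prompt format, and generation parameters are assumptions, and the paper's actual decoding uses the supported beam search of Algorithm 1 rather than vanilla beam search (see https://github.com/chenxran/Orion for the real pipeline).

```python
# Minimal sketch (assumptions throughout): generate k candidate hypothesis
# atoms for a premise atom with a BART seq2seq model via Hugging Face
# transformers. Orion fine-tunes its own checkpoint and constrains decoding;
# this uses an off-the-shelf model and plain beam search for illustration.
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "facebook/bart-large"  # placeholder checkpoint, not Orion's
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

def induce_hypotheses(premise: str, k: int = 5):
    """Return k beam-search generations for a premise atom
    such as '<A> is the capital of <B>'."""
    inputs = tokenizer(premise, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=max(k, 5),      # beam width must be at least k
        num_return_sequences=k,   # k = 5, 10, or 20 in the paper's setup
        max_length=32,
        early_stopping=True,
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

print(induce_hypotheses("<A> is the capital of <B>", k=5))
```

The k values passed to `num_return_sequences` mirror the settings quoted in the Experiment Setup row (k = 5, 10, 20 open rules per premise; k = 29 and 41 hypothesis atoms for the Disease and Spouse tasks).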