Safe Abductive Learning in the Presence of Inaccurate Rules
Authors: Xiao-Wen Yang, Jie-Jing Shao, Wei-Wei Tu, Yu-Feng Li, Wang-Zhou Dai, Zhi-Hua Zhou
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on diverse tasks show that our method can tolerate at least twice as many inaccurate rules as accurate ones and achieve highly competitive performance while other methods cannot. |
| Researcher Affiliation | Collaboration | National Key Laboratory for Novel Software Technology, Nanjing University, China; School of Artificial Intelligence, Nanjing University, China; 4Paradigm Inc., Beijing, China. {yangxw, shaojj, liyf, daiwz, zhouzh}@lamda.nju.edu.cn, tuww.cn@gmail.com |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links or explicit statements about releasing source code for the described methodology. |
| Open Datasets | Yes | MNIST Addition: this task was first introduced by DeepProbLog (Manhaeve et al. 2018) and contains two subtasks, Single-digit and Multi-digit. The input of the first subtask is a pair of MNIST images (LeCun et al. 1998), and the output is the sum of the individual digits. |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits with percentages, sample counts, or references to predefined splits. |
| Hardware Specification | Yes | All experiments are repeated five times with Nvidia Tesla V100 GPU. |
| Software Dependencies | No | The paper mentions using a pre-trained BERT model and LeNet-5 as perception models, but it does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | σ is the hyperparameter for selecting samples with high confidence, and we set σ = 0.99 in all our experiments. |
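The paper reports only the threshold value σ = 0.99, not its implementation. A minimal sketch of such a confidence-threshold filter (the function name and NumPy-based interface are assumptions, not the authors' code) might look like:

```python
import numpy as np

def select_high_confidence(probs, sigma=0.99):
    """Keep only samples whose top predicted probability exceeds sigma.

    probs: (n_samples, n_classes) array of softmax outputs.
    Returns the indices of selected samples and their predicted labels.
    """
    confidence = probs.max(axis=1)        # top class probability per sample
    labels = probs.argmax(axis=1)         # predicted class per sample
    mask = confidence >= sigma            # high-confidence selection
    return np.flatnonzero(mask), labels[mask]

# Toy softmax outputs for three samples over two classes.
probs = np.array([
    [0.995, 0.005],   # confident -> kept
    [0.60,  0.40],    # uncertain -> dropped
    [0.001, 0.999],   # confident -> kept
])
idx, labels = select_high_confidence(probs, sigma=0.99)
# idx -> [0, 2], labels -> [0, 1]
```

With σ = 0.99, only near-certain predictions pass the filter, which is consistent with the paper's goal of limiting the influence of unreliable perception outputs.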