Implicitly learning to reason in first-order logic

Authors: Vaishak Belle, Brendan Juba

NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "In this work, we present a new theoretical approach to robustly learning to reason in first-order logic, and consider universally quantified clauses over a countably infinite domain."
Researcher Affiliation | Academia | Vaishak Belle, University of Edinburgh & Alan Turing Institute (vaishak@ed.ac.uk); Brendan Juba, Washington University in St. Louis (bjuba@wustl.edu)
Pseudocode | Yes | Algorithm 1: Reasoning with implicit learning (a hedged sketch of the paradigm follows this table)
Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the methodology described.
Open Datasets | No | The paper describes a theoretical framework and does not perform experiments with datasets, thus no dataset access information is provided.
Dataset Splits | No | The paper presents a theoretical framework and does not conduct experiments, therefore it does not provide dataset split information for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not report on experiments, thus no hardware specifications are provided.
Software Dependencies | No | The paper is theoretical and does not report on experiments, thus no specific software dependencies with version numbers are provided.
Experiment Setup | No | The paper presents a theoretical framework and does not describe any experimental setup or hyperparameters.
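
The table notes that the paper's main algorithmic artifact is its pseudocode (Algorithm 1, "Reasoning with implicit learning"). As context, below is a minimal, purely propositional sketch of the implicit-learning paradigm the paper builds on: a query is accepted when a restricted (sound but incomplete) proof procedure refutes its negation from the explicit knowledge base together with the literals witnessed in each partial example, on a sufficiently large fraction of the examples. This is an illustrative sketch under those assumptions, not the paper's first-order Algorithm 1 (which handles universally quantified clauses over a countably infinite domain); all function and variable names below are hypothetical.

```python
# Illustrative propositional sketch of "reasoning with implicit learning".
# All names here are hypothetical; the paper's Algorithm 1 operates on
# first-order clauses over a countably infinite domain.

from typing import FrozenSet, List, Set

Clause = FrozenSet[int]          # a clause is a set of non-zero integer literals
PartialExample = Set[int]        # literals observed (witnessed) in one example


def unit_propagation_refutes(clauses: List[Clause]) -> bool:
    """Sound but incomplete refutation test: return True if unit propagation
    derives the empty clause from the given clause set."""
    working = [set(c) for c in clauses]
    assignment: Set[int] = set()
    changed = True
    while changed:
        changed = False
        for c in working:
            # Drop literals falsified by the current assignment.
            reduced = {lit for lit in c if -lit not in assignment}
            if any(lit in assignment for lit in reduced):
                continue                       # clause already satisfied
            if not reduced:
                return True                    # empty clause: contradiction
            if len(reduced) == 1:
                (lit,) = reduced
                if lit not in assignment:      # propagate the forced literal
                    assignment.add(lit)
                    changed = True
    return False


def implicitly_entailed(kb: List[Clause],
                        query_negation: List[Clause],
                        examples: List[PartialExample],
                        epsilon: float) -> bool:
    """Accept the query if, on at least a (1 - epsilon) fraction of the partial
    examples, KB + witnessed literals + (negated query) is refuted."""
    successes = 0
    for rho in examples:
        witnessed = [frozenset({lit}) for lit in rho]
        if unit_propagation_refutes(kb + witnessed + query_negation):
            successes += 1
    return successes >= (1.0 - epsilon) * len(examples)


if __name__ == "__main__":
    # Toy run: KB encodes 1 -> 2 as the clause {-1, 2}; every example witnesses
    # literal 1; the query is atom 2, so its negation is the unit clause {-2}.
    kb = [frozenset({-1, 2})]
    examples = [{1}, {1}, {1}, {1}]
    print(implicitly_entailed(kb, [frozenset({-2})], examples, epsilon=0.05))  # True
```

The design point the sketch tries to convey is that nothing is ever learned explicitly: the partial examples enter each refutation attempt only through their witnessed literals, so learnable knowledge is exploited without ever being written down as formulas.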