Neural Belief Reasoner
Authors: Haifeng Qian
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | This paper studies NBR in two tasks. The first is a synthetic unsupervised-learning task, which demonstrates NBR's ability to perform multi-hop reasoning, reasoning with uncertainty, and reasoning about conflicting information. The second is supervised learning: a robust MNIST classifier for 4 and 9, which is the most challenging pair of digits. |
| Researcher Affiliation | Industry | Haifeng Qian IBM Research, Yorktown Heights, NY, USA qianhaifeng@us.ibm.com |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | Yes | Source code for training and inference is available at http://researcher.watson.ibm.com/group/10228 |
| Open Datasets | Yes | The second task is supervised learning: a robust MNIST classifier for 4 and 9, which is the most challenging pair of digits. |
| Dataset Splits | No | The paper mentions 'training images' and refers to the 'MNIST training set' but does not specify exact training/validation/test splits (e.g., percentages or counts) or refer to a standard split with a citation providing such details. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments were provided. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA 11.x) were provided. |
| Experiment Setup | Yes | The first seven Gi(·)s are trained jointly with a loss function in which s and β are hyperparameters; a further hyperparameter ω is used. Iteration limit is 100 for PGD, 50K for BA, and 10K for CW and SCW. |