A Scalable Reasoning and Learning Approach for Neural-Symbolic Stream Fusion

Authors: Danh Le-Phuoc, Thomas Eiter, Anh Le-Tuan

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments with our first prototype on multi-object tracking benchmarks for autonomous driving and traffic monitoring show that our flexible approach can considerably improve both accuracy and processing throughput compared to the DNN-based counterparts." and the "Experimental Evaluation" section
Researcher Affiliation | Academia | Danh Le-Phuoc (1), Thomas Eiter (2), Anh Le-Tuan (1); (1) Open Distributed Systems, Technical University Berlin, Germany; (2) Institute of Logic and Computation, Vienna University of Technology (TU Wien), Austria
Pseudocode | Yes | Algorithm 1 (Incremental Reasoning) and Algorithm 2 (Parallel Learning)
Open Source Code | Yes | "All source code and experiments will be released at https://github.com/cqels/SSR/ as a part of the open source project CQELS Framework (Le-Phuoc et al. 2011)."
Open Datasets | Yes | "Our experiments with real data on traffic monitoring from the AI City Challenge (Tang et al. 2019) and autonomous driving from the KITTI dataset (Geiger, Lenz, and Urtasun 2012) show that our approach can deliver not only better accuracy (5%-15%) but also higher processing throughput than traditional DNN counterparts."
Dataset Splits | No | The paper mentions using 13 cameras for training and 12 for evaluation on the AIC dataset, and it mentions KITTI, but it does not explicitly provide training/validation/test splits or reference predefined splits for reproducibility.
Hardware Specification | Yes | "We have conducted all experiments on a workstation with 2 Intel Xeon Silver 4114 processors having 10 physical cores each, 1TB RAM, 2 NVIDIA Tesla V100 16GB running Centos 7.0."
Software Dependencies | Yes | "We use the Java native interface to wrap C/C++ libraries of Clingo 5.4.0 as ASP Solver and NVidia CUDA 10.2 as DNN inference engine." (A minimal Clingo-invocation sketch follows the table.)
Experiment Setup | Yes | "In weight learning, we used starting weights 1, δ = 0.001, and learning rate λ = 0.01 for both learning pipelines." (A hedged weight-update sketch follows the table.)
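
For context on the Software Dependencies row: the paper wraps Clingo 5.4.0's C/C++ libraries through the Java Native Interface. As a stand-in, the sketch below drives the same solver through Clingo's official Python API instead; the toy ASP program and its predicate names (track/1, near/2, same_object/2) are hypothetical illustrations, not rules from the paper.

```python
# Minimal sketch: grounding and solving a toy ASP program with Clingo's
# official Python API. The paper itself wraps Clingo's C/C++ libraries
# via JNI; this Python stand-in exercises the same solver.
import clingo

ctl = clingo.Control(["--models=0"])  # 0 = enumerate all answer sets

# Hypothetical toy program; NOT a rule set from the paper.
ctl.add("base", [], """
    track(1). track(2).
    near(1, 2).
    same_object(X, Y) :- near(X, Y), track(X), track(Y).
""")

ctl.ground([("base", [])])  # ground the "base" program part

with ctl.solve(yield_=True) as handle:
    for model in handle:
        print("Answer set:", model)
```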
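Similarly, for the Experiment Setup row: the quoted text gives starting weights of 1, δ = 0.001, and learning rate λ = 0.01, but it does not spell out the update rule. The sketch below is only a guess at the shape of such a loop, assuming a plain gradient-style update with δ read as a convergence threshold; gradient_fn is a hypothetical placeholder for whatever gradient the paper's parallel learning pipeline (Algorithm 2) actually computes.

```python
# Hedged sketch of a rule-weight learning loop. The update rule and the
# role of delta are ASSUMPTIONS; the paper only states the hyperparameter
# values (starting weights 1, delta = 0.001, learning rate lambda = 0.01).
LAMBDA = 0.01  # learning rate from the paper's setup
DELTA = 0.001  # assumed here to act as a convergence threshold

def learn_weights(rules, gradient_fn, max_iters=1000):
    """Learn one weight per rule, starting from 1 as reported."""
    weights = {r: 1.0 for r in rules}
    for _ in range(max_iters):
        # gradient_fn is a hypothetical callback: (rule, weights) -> gradient
        steps = {r: LAMBDA * gradient_fn(r, weights) for r in rules}
        for r, step in steps.items():
            weights[r] += step
        if max(abs(step) for step in steps.values()) < DELTA:
            break  # weights have stabilized
    return weights
```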