Neuro-symbolic Learning Yielding Logical Constraints

Authors: Zenan Li, Yunpeng Huang, Zhaoyu Li, Yuan Yao, Jingwei Xu, Taolue Chen, Xiaoxing Ma, Jian Lu

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical evaluations with four tasks, viz. Visual Sudoku Solving, Self-Driving Path Planning, Chained XOR, and Nonograms, demonstrate the new learning capability and the significant performance superiority of the proposed framework.
Researcher Affiliation | Academia | Zenan Li (1), Yunpeng Huang (1), Zhaoyu Li (2), Yuan Yao (1), Jingwei Xu (1), Taolue Chen (3), Xiaoxing Ma (1), Jian Lü (1). (1) State Key Lab of Novel Software Technology, Nanjing University, China; (2) Department of Computer Science, University of Toronto, Canada; (3) School of Computing and Mathematical Sciences, Birkbeck, University of London, UK.
Pseudocode | Yes | Algorithm 1: Neuro-symbolic Learning Procedure
Open Source Code | Yes | The code is available at https://github.com/Lizn-zn/Nesy-Programming.
Open Datasets | Yes | We consider two 9 x 9 visual Sudoku solving datasets, i.e., the SATNet dataset [Wang et al., 2019, Topan et al., 2021] and the RRN dataset [Yang et al., 2023]... We simulate the self-driving path planning task based on two datasets, i.e., Kitti [Geiger et al., 2013] and nuScenes [Caesar et al., 2020].
Dataset Splits | No | The paper mentions "9K/1K training/test examples" for the datasets but does not explicitly describe a validation split (e.g., percentages or counts for a validation set).
Hardware Specification | Yes | The experiments were conducted on a GPU server with two Intel Xeon Gold 5118 CPUs @ 2.30 GHz, 400 GB RAM, and 9 GeForce RTX 2080 Ti GPUs.
Software Dependencies | No | The paper states "We implemented our approach via the PyTorch DL framework" and "We use Z3 SMT (MaxSAT) solver [Moura and Bjørner, 2008]". While the software is named, specific version numbers for PyTorch or Z3 are not provided.
Experiment Setup | Yes | Hyperparameter tuning: "Some hyperparameters are introduced in our framework. In Table 3 we summarize the (hyper-)parameters, together with their corresponding initialization or update strategies." The reported settings are: α fixed to 0.5; λ fixed to 0.1; t1/t2 increased per epoch; η on an Adam schedule; γ adaptively set (γ = 0.001 by default).
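The reported settings from Table 3 can be sketched as a small configuration helper. This is an illustrative sketch only: the names (`make_hparams`, `step_epoch`) and the multiplicative growth rule for t1/t2 are assumptions, since the paper states only that t1/t2 are "increased per epoch" without giving the rule.

```python
def make_hparams():
    """Initial hyperparameter values as summarized in the paper's Table 3."""
    return {
        "alpha": 0.5,    # fixed to 0.5
        "lambda": 0.1,   # fixed to 0.1
        "t1": 1.0,       # increased per epoch (initial value assumed)
        "t2": 1.0,       # increased per epoch (initial value assumed)
        "gamma": 0.001,  # adaptively set; 0.001 by default
    }

def step_epoch(hparams, growth=1.1):
    """Per-epoch update: grow t1/t2, leave alpha/lambda fixed.

    The multiplicative growth factor is a placeholder; the paper does not
    specify how t1/t2 increase. The learning rate eta follows an Adam
    schedule and is handled by the optimizer, not by this function.
    """
    hparams["t1"] *= growth
    hparams["t2"] *= growth
    return hparams
```

A training loop would call `step_epoch` once per epoch while the optimizer's Adam schedule manages η separately.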