Softened Symbol Grounding for Neuro-symbolic Systems

Authors: Zenan Li, Yuan Yao, Taolue Chen, Jingwei Xu, Chun Cao, Xiaoxing Ma, Jian Lü

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments with three representative neuro-symbolic learning tasks demonstrate that, owing to its superior symbol grounding capability, our framework successfully solves problems well beyond the frontier of the existing proposals.
Researcher Affiliation | Academia | Zenan Li1, Yuan Yao1, Taolue Chen2, Jingwei Xu1, Chun Cao1, Xiaoxing Ma1, Jian Lü1. 1State Key Lab of Novel Software Technology, Nanjing University, China; 2Department of Computer Science, Birkbeck, University of London, UK.
Pseudocode | Yes | Algorithm 1 (Neural Symbolic Learning Procedure) is provided. An illustrative skeleton of such a two-stage procedure appears after the table.
Open Source Code | Yes | The code is available at https://github.com/SoftWiser-group/Soften-NeSy-learning.
Open Datasets | Yes | We first evaluate our approach on the handwritten formula dataset provided by Li et al. (2020). We next evaluate our approach on a visual Sudoku classification task (Wang et al., 2019). We randomly generate 3K/1K graphs as training/test sets through NetworkX (Hagberg et al., 2008). An illustrative graph-generation and split sketch appears after the table.
Dataset Splits | No | No explicit mention of a validation set split or its size/percentage was found; the paper primarily discusses training and test sets.
Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments (e.g., GPU/CPU models, memory specifications).
Software Dependencies | No | The projection operator is specific to each task, and the corresponding inverse projection operator is implemented by the Z3 SMT solver (Moura & Bjørner, 2008), with parallel computation via Joblib (Joblib Development Team, 2020). No version numbers are given for either dependency. An illustrative Z3-plus-Joblib sketch appears after the table.
Experiment Setup | Yes | For all tasks, the batch size was set to 64. For all comparison methods and our Stage I algorithm, the number of epochs is fixed to 1,000; for our Stage II algorithm, the number of epochs is fixed at 30. We fix T = 10 in Alg. 1, i.e., conducting ten random walk steps before one gradient descent step. ... RL, MAPO, and SSL used the Adam algorithm with learning rate 5e-4. For our approaches, we used the SGD algorithm with learning rate 0.1 in Stage I, and the Adam algorithm with learning rate 1e-3. The stated values are collected in a config sketch after the table.
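
Pseudocode row: the paper's Algorithm 1 is not reproduced in this report. The skeleton below is a minimal sketch of what a two-stage loop with T random-walk steps per gradient update could look like, assuming a PyTorch-style model; the three helper functions are hypothetical stubs, not the authors' API.

```python
# Minimal sketch only, not the authors' implementation. init_state,
# random_walk_step, and nesy_loss are hypothetical stand-ins for the
# paper's task-specific components.
import torch

def init_state(model, inputs):            # stub: initial symbol grounding from network outputs
    return model(inputs).argmax(dim=-1)

def random_walk_step(state, constraint):  # stub: one move within the constraint's solution space
    return state

def nesy_loss(outputs, state):            # stub: fit network outputs to the walked grounding
    return torch.nn.functional.cross_entropy(outputs.flatten(0, -2), state.flatten())

def train_two_stage(model, loader, epochs_stage1=1000, epochs_stage2=30, walk_steps=10):
    # Optimizer choices below follow the quoted experiment setup.
    stages = [(torch.optim.SGD(model.parameters(), lr=0.1), epochs_stage1),
              (torch.optim.Adam(model.parameters(), lr=1e-3), epochs_stage2)]
    for optimizer, epochs in stages:
        for _ in range(epochs):
            for inputs, constraint in loader:
                state = init_state(model, inputs)
                for _ in range(walk_steps):                      # T = 10 random walk steps ...
                    state = random_walk_step(state, constraint)  # ... before one gradient descent step
                loss = nesy_loss(model(inputs), state)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```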
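
Open Datasets row: the quoted text states that 3K/1K random graphs are generated with NetworkX, but the excerpt does not specify the graph model or its parameters. The sketch below uses `gnp_random_graph` with an arbitrary size and edge density purely as a placeholder.

```python
# Placeholder sketch: only the 3,000 / 1,000 train/test counts come from the
# quoted text; the graph model, node count, and edge probability are assumptions.
import random
import networkx as nx

def generate_split(n_train=3000, n_test=1000, n_nodes=10, edge_prob=0.3, seed=0):
    rng = random.Random(seed)
    graphs = [nx.gnp_random_graph(n_nodes, edge_prob, seed=rng.randrange(2**31))
              for _ in range(n_train + n_test)]
    return graphs[:n_train], graphs[n_train:]

train_graphs, test_graphs = generate_split()
```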
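
Software Dependencies row: the report notes Z3 and Joblib without version numbers. The toy below shows the general pattern implied by the quoted text, per-sample symbolic constraints handed to the Z3 SMT solver and fanned out with Joblib; the arithmetic constraint is a stand-in, since the actual inverse projection is task-specific.

```python
# Toy dependency sketch: the constraint is a stand-in, not the paper's
# task-specific inverse projection operator.
from joblib import Parallel, delayed
from z3 import Int, Solver, sat

def solve_one(target):
    """Find any non-negative integer pair (x, y) with x + y == target, or None."""
    x, y = Int("x"), Int("y")
    solver = Solver()
    solver.add(x + y == target, x >= 0, y >= 0)
    if solver.check() == sat:
        model = solver.model()
        return model[x].as_long(), model[y].as_long()
    return None

# Independent SMT queries are spread over all cores with Joblib.
groundings = Parallel(n_jobs=-1)(delayed(solve_one)(t) for t in range(8))
```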
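
Experiment Setup row: for convenience, the stated hyperparameters are collected into a single config sketch. Fields not mentioned in the excerpt (momentum, weight decay, schedulers, seeds) are deliberately left out rather than guessed.

```python
# Hyperparameters as quoted in the Experiment Setup row; nothing beyond the excerpt is filled in.
EXPERIMENT_SETUP = {
    "batch_size": 64,
    "epochs": {
        "comparison_methods": 1000,
        "stage1": 1000,
        "stage2": 30,
    },
    "random_walk_steps_T": 10,  # ten random walk steps before each gradient descent step
    "optimizers": {
        "baselines_RL_MAPO_SSL": {"name": "Adam", "lr": 5e-4},
        "stage1": {"name": "SGD", "lr": 0.1},
        "stage2": {"name": "Adam", "lr": 1e-3},
    },
}
```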