Algorithmic Concept-Based Explainable Reasoning

Authors: Dobrik Georgiev, Pietro Barbiero, Dmitry Kazhdan, Petar Veličković, Pietro Liò

AAAI 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using three case studies we demonstrate that: (i) our proposed model is capable of accurately learning concepts and extracting propositional formulas based on the learned concepts for each target class; (ii) our concept-based GNN models achieve comparative performance with state-of-the-art models; (iii) we can derive global graph concepts, without explicitly providing any supervision on graph-level concepts. Our evaluation experiments demonstrate that all of our extracted rules strongly generalise to graphs of 5× larger size. For all tasks we did a 10:1:1 train:validation:testing split. |
| Researcher Affiliation | Collaboration | Dobrik Georgiev1, Pietro Barbiero1, Dmitry Kazhdan1, Petar Veličković2, Pietro Liò1; 1University of Cambridge, 2DeepMind |
| Pseudocode | No | The paper describes algorithmic steps and equations but does not present a clearly labeled pseudocode block or algorithm. |
| Open Source Code | Yes | All our training and data generation code is available at https://github.com/HekpoMaH/algorithmic-concepts-reasoning. |
| Open Datasets | No | The data for each task is generated by the corresponding deterministic algorithm. More details about the data generation are present in Appendix E. The paper describes generating custom datasets but does not provide concrete access information (link, DOI, or formal citation with authors/year) for public availability of these generated datasets. (A hedged data-generation sketch follows the table.) |
| Dataset Splits | Yes | For all tasks we did a 10:1:1 train:validation:testing split. |
| Hardware Specification | No | The paper mentions 'GPU memory constraints' but does not specify any particular GPU models, CPU models, or detailed hardware specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions 'pytorch explain' (Barbiero et al. 2021) and the Adam optimizer (Kingma and Ba 2015) but does not provide specific version numbers for these or for other software dependencies such as Python or PyTorch. |
| Experiment Setup | Yes | We train our models using teacher forcing (Williams and Zipser 1989) for a fixed number of epochs (500 for BFS, 3000 for the parallel coloring, 100 for Kruskal's). For training we use the Adam optimizer (Kingma and Ba 2015) with an initial learning rate of 0.001 and batch size 32. (A minimal training-configuration sketch follows the table.) |
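
As a point of reference for the "Dataset Splits" and "Experiment Setup" rows, the following is a minimal, hedged sketch of the reported configuration: a 10:1:1 train:validation:test split, the Adam optimizer with an initial learning rate of 0.001, batch size 32, and a fixed epoch budget. The dataset, model, and loss below are hypothetical placeholders rather than the authors' concept-based GNN, and teacher forcing over intermediate algorithm states is not reproduced.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

# Hypothetical stand-in data: the paper generates graphs from deterministic
# algorithms (BFS, parallel coloring, Kruskal's); random tensors are used here.
features = torch.randn(1200, 16)
labels = torch.randint(0, 2, (1200,))
dataset = TensorDataset(features, labels)

# 10:1:1 train:validation:test split, as reported in the paper.
n = len(dataset)
n_train = int(n * 10 / 12)
n_val = int(n * 1 / 12)
n_test = n - n_train - n_val
train_set, val_set, test_set = random_split(dataset, [n_train, n_val, n_test])

train_loader = DataLoader(train_set, batch_size=32, shuffle=True)  # batch size 32

# Hypothetical model standing in for the paper's concept-based GNN.
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, initial lr 0.001
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(500):  # e.g. the 500-epoch budget reported for the BFS task
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```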
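
The "Open Datasets" row notes that the data is produced by running the underlying deterministic algorithm itself. As an illustration only, the sketch below generates a random graph and records round-by-round BFS reachability flags, one plausible form of step-wise supervision; the graph size, edge density, and label format are assumptions, not details taken from the paper or its Appendix E.

```python
import random

def random_graph(n: int, p: float = 0.3):
    """Erdos-Renyi-style undirected adjacency list (size and density are assumptions)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def bfs_reachability_steps(adj, source: int = 0):
    """Execute BFS round by round and record each node's 'visited' flag after
    every round; such step-wise traces are one plausible way to derive
    supervision from the deterministic algorithm."""
    visited = {i: False for i in adj}
    visited[source] = True
    steps = [dict(visited)]
    frontier = {source}
    while frontier:
        frontier = {v for u in frontier for v in adj[u] if not visited[v]}
        for v in frontier:
            visited[v] = True
        if frontier:
            steps.append(dict(visited))
    return steps

graph = random_graph(8)
for t, flags in enumerate(bfs_reachability_steps(graph)):
    print(t, flags)
```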