Aligning Artificial Neural Networks and Ontologies towards Explainable AI

Authors: Manuel de Sousa Ribeiro, João Leite

AAAI 2021, pp. 4932-4940 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using an image classification problem as testing ground, we discuss how to map the internal state of a neural network to the concepts of an ontology, examine whether the results obtained by the established mappings match our understanding of the mapped concepts, and analyze the justifications obtained through this method. (See the mapping-network sketch below the table.)
Researcher Affiliation | Academia | Manuel de Sousa Ribeiro, João Leite, NOVA LINCS, School of Science and Technology, NOVA University Lisbon, Portugal; mad.ribeiro@campus.fct.unl.pt, jleite@fct.unl.pt
Pseudocode | Yes | Figure 6: Input Reduce procedure for feature selection.
Open Source Code | No | The paper provides a citation to their dataset (Explainable Abstract Trains Dataset, CoRR abs/2012.12115) but does not provide a link or statement about the availability of the source code for their methodology.
Open Datasets | Yes | The images used in this paper’s dataset (de Sousa Ribeiro, Krippahl, and Leite 2020) were inspired by those developed by J. Larson and R. S. Michalski in (Larson and Michalski 1977) and use fragments of images from (Olmos and Kingdom 2004) as background.
Dataset Splits | Yes | Each neural network was trained with a balanced dataset of 25 000 images and achieves an accuracy of about 99% on a balanced test set of 10 000 images. All three neural networks possess a different architecture, although each possesses at least a set of convolutional, batch normalization, pooling, and dropout layers followed by a set of fully connected and batch normalization layers, with a single output neuron at the end.
Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., specific GPU/CPU models, memory, or cloud instance types).
Software Dependencies | No | The paper mentions using the 'Adam (Kingma and Ba 2015)' optimization algorithm, but does not provide specific version numbers for any software libraries, frameworks (like TensorFlow or PyTorch), or programming languages used.
Experiment Setup | Yes | All neural networks were trained using the optimization algorithm Adam (Kingma and Ba 2015), with a learning rate of 0.001, the binary cross entropy as loss function, and early stopping with a patience value of 15 for mapping networks and 30 for convolutional neural networks.
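
The "Dataset Splits" and "Experiment Setup" rows jointly describe the classifiers' layer types and training regime. The sketch below is a minimal illustration of such a network and its training configuration, assuming TensorFlow/Keras as the framework; the layer counts, filter sizes, dropout rate, and input resolution are placeholders not taken from the paper, while the Adam optimizer, 0.001 learning rate, binary cross-entropy loss, and early-stopping patience values come from the quoted text.

```python
# Minimal sketch (not the authors' code) of one classifier matching the
# quoted description: convolutional, batch-normalization, pooling, and
# dropout layers followed by fully connected and batch-normalization
# layers, with a single output neuron. Layer counts, filter sizes, the
# dropout rate, the 152x152x3 input shape, and the TensorFlow/Keras
# framework are assumptions, not details reported in the paper.
import tensorflow as tf

def build_cnn(input_shape=(152, 152, 3)):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # single output neuron
    ])

# Training setup as quoted in the "Experiment Setup" row: Adam with a
# learning rate of 0.001, binary cross-entropy loss, and early stopping
# (patience 30 for the convolutional networks, 15 for mapping networks).
model = build_cnn()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy",
              metrics=["accuracy"])
early_stopping = tf.keras.callbacks.EarlyStopping(patience=30,
                                                  restore_best_weights=True)
# model.fit(train_images, train_labels,
#           validation_data=(val_images, val_labels),
#           callbacks=[early_stopping])
```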
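
The "Research Type" row quotes the paper's central idea: mapping a trained network's internal state to the concepts of an ontology. As a rough, hedged illustration of that idea (not the authors' implementation), the sketch below trains a small "mapping network" on the hidden-layer activations of an already-trained classifier to predict a single binary concept label; the framework, the layer selected, and the mapping-network layout are assumptions.

```python
# Illustrative sketch only: learn a mapping from a trained network's hidden
# activations to an ontology concept (e.g. a binary label indicating whether
# a concept such as a hypothetical "PassengerTrain" class holds for each
# image). The framework (TensorFlow/Keras), the chosen layer, and the
# mapping-network layout are assumptions, not details taken from the paper.
import tensorflow as tf

def train_mapping_network(trained_cnn, hidden_layer_name, images, concept_labels):
    # Expose the chosen hidden layer of the already-trained classifier.
    extractor = tf.keras.Model(
        inputs=trained_cnn.input,
        outputs=trained_cnn.get_layer(hidden_layer_name).output)
    activations = extractor.predict(images)

    # Small fully connected network from activations to the concept label.
    mapper = tf.keras.Sequential([
        tf.keras.layers.Input(shape=activations.shape[1:]),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    # Reuse the quoted training setup: Adam (lr 0.001), binary cross-entropy,
    # and early stopping with patience 15 for mapping networks.
    mapper.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                   loss="binary_crossentropy",
                   metrics=["accuracy"])
    mapper.fit(activations, concept_labels,
               validation_split=0.1, epochs=200,
               callbacks=[tf.keras.callbacks.EarlyStopping(
                   patience=15, restore_best_weights=True)])
    return mapper
```

Under this reading, a sufficiently accurate mapping network lets each input be described in terms of ontology concepts, which can then be passed to a reasoner to produce the justifications mentioned in the quoted abstract.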