Logic Tensor Networks for Semantic Image Interpretation
Authors: Ivan Donadello, Luciano Serafini, Artur d'Avila Garcez
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed approach is evaluated on a standard image processing benchmark. Experiments show that background knowledge in the form of logical constraints can improve the performance of purely data-driven approaches, including the state-of-the-art Fast Region-based Convolutional Neural Networks (Fast R-CNN). |
| Researcher Affiliation | Academia | Ivan Donadello, Fondazione Bruno Kessler and University of Trento, Trento, Italy, donadello@fbk.eu; Luciano Serafini, Fondazione Bruno Kessler, Via Sommarive 18, I-38123 Trento, Italy, serafini@fbk.eu; Artur d'Avila Garcez, City, University of London, Northampton Square, London EC1V 0HB, UK, a.garcez@city.ac.uk |
| Pseudocode | No | The paper does not include pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | LTN has been implemented as a Google TENSORFLOW library. Code, part of the ontology, and dataset are available at https://gitlab.fbk.eu/donadello/LTN_IJCAI17 |
| Open Datasets | Yes | We use the PASCAL-Part dataset that contains 10103 images with bounding boxes annotated with object-types and the part-of relation defined between pairs of bounding boxes. Labels are divided into three main groups: animals, vehicles and indoor objects, with their corresponding parts and part-of label. ... The images were then split into a training set with 80%, and a test set with 20% of the images, maintaining the same proportion of the number of bounding boxes for each label. |
| Dataset Splits | No | The paper specifies a training and test set split but does not mention a separate validation set split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | LTN has been implemented as a Google TENSORFLOW library. While TENSORFLOW is mentioned, no specific version number for it or other software dependencies is provided. |
| Experiment Setup | Yes | The LTNs were set up with tensor of k = 6 layers and a regularization parameter λ = 10⁻¹⁰. We chose Łukasiewicz's t-norm (µ(a, b) = max(0, a + b − 1)) and use the harmonic mean as aggregation operator. We ran 1000 training epochs of the RMSProp learning algorithm available in TENSORFLOW. |
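The two fuzzy-logic operators quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration using plain NumPy rather than the paper's TensorFlow implementation; the function names are ours, not from the released code.

```python
import numpy as np

def lukasiewicz_tnorm(a, b):
    """Lukasiewicz t-norm: mu(a, b) = max(0, a + b - 1)."""
    return np.maximum(0.0, a + b - 1.0)

def harmonic_mean(values):
    """Harmonic-mean aggregation over truth degrees in (0, 1]."""
    values = np.asarray(values, dtype=float)
    return len(values) / np.sum(1.0 / values)

# Conjoining two truth degrees, then aggregating several clause values.
print(lukasiewicz_tnorm(0.8, 0.7))        # 0.5
print(harmonic_mean([0.5, 0.8, 1.0]))     # ~0.7059
```

The harmonic mean penalizes low truth degrees more strongly than the arithmetic mean, which is why it is a common choice for aggregating clause satisfaction in fuzzy-logic settings.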