Characterization of the Convex Łukasiewicz Fragment for Learning From Constraints
Authors: Francesco Giannini, Michelangelo Diligenti, Marco Gori, Marco Maggini
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The proposed approach is evaluated on a classification task to show how the logical rules can effectively improve the accuracy of a trained classifier. Section 4 provides an applicative example showing the effect of rules expressed in the proposed fragment on a transductive classification task. |
| Researcher Affiliation | Academia | Francesco Giannini, Michelangelo Diligenti, Marco Gori, Marco Maggini Department of Information Engineering and Mathematical Sciences University of Siena Siena, via Roma 56, Italy {fgiannini,diligmic,marco,maggini}@diism.unisi.it |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing code or providing a link to source code for the methodology described. |
| Open Datasets | No | The paper mentions using images from the ImageNet database and a benchmark from Winston (Winston and Horn 1986), but it does not provide a direct link, DOI, repository name, or proper bibliographic citation for the exact processed dataset used in its experiments. |
| Dataset Splits | No | The paper mentions varying the percentage of training supervisions (between 10% and 90%) and evaluating on 'test labels', but it does not explicitly provide the train/validation/test splits, percentages, or sample counts needed to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions using a feedforward neural network and Resilient backpropagation, but it does not provide specific software details like library names with version numbers (e.g., Python 3.8, PyTorch 1.9) needed to replicate the experiment. |
| Experiment Setup | Yes | A feedforward neural network having one single output neuron and a single hidden layer containing 30 neurons was trained... The single output neuron used a sigmoidal activation function, while the hidden neurons used a rectified linear activation function... trained against the training set labels using a quadratic cost function on the output. Resilient backpropagation... was executed for 500 full-batch iterations and using 0.0001 as initial learning rate for all the weights. |
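The experiment setup quoted in the last row can be sketched as follows. The paper does not name a framework, so PyTorch, the synthetic data, and the input dimensionality below are assumptions made purely for illustration; the architecture, cost function, optimizer, iteration count, and initial step size follow the paper's description.

```python
# Hedged sketch of the described setup: a feedforward network with one
# hidden layer of 30 ReLU units and a single sigmoidal output neuron,
# trained with a quadratic cost via Resilient backpropagation (Rprop)
# for 500 full-batch iterations with 0.0001 as the initial step size.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative binary-classification data (assumed; not from the paper).
X = torch.randn(100, 2)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)

model = nn.Sequential(
    nn.Linear(2, 30),   # single hidden layer with 30 neurons
    nn.ReLU(),          # rectified linear hidden activations
    nn.Linear(30, 1),   # single output neuron
    nn.Sigmoid(),       # sigmoidal output activation
)

loss_fn = nn.MSELoss()  # quadratic cost on the output
optimizer = torch.optim.Rprop(model.parameters(), lr=0.0001)

losses = []
for _ in range(500):    # 500 full-batch iterations
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

Rprop's `lr` argument in PyTorch is the initial per-weight step size, which matches the paper's "0.0001 as initial learning rate for all the weights"; the step sizes then adapt multiplicatively, as Rprop ignores gradient magnitudes.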