Embedding Symbolic Knowledge into Deep Networks
Authors: Yaqi Xie, Ziwei Xu, Mohan S. Kankanhalli, Kuldeep S. Meel, Harold Soh
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our approach improves the performance of models trained to perform entailment checking and visual relation prediction. |
| Researcher Affiliation | Academia | Yaqi Xie, Ziwei Xu, Mohan S. Kankanhalli, Kuldeep S. Meel, Harold Soh, School of Computing, National University of Singapore {yaqixie, ziwei-xu, mohan, meel, harold}@comp.nus.edu.sg |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We have made our source code available online at https://github.com/ZiweiXU/LENSR. |
| Open Datasets | Yes | We evaluated our method on VRD [28]. The VRD dataset contains 5,000 images with 100 object categories and 70 annotated predicates (relations). For each image, we sample pairs of objects and induce their spatial relations. |
| Dataset Splits | No | The paper does not provide specific training/validation/test split percentages or sample counts for the datasets used. |
| Hardware Specification | No | The paper mentions running experiments 'on a standard workstation' but does not specify any exact GPU/CPU models, processor types, or memory amounts. |
| Software Dependencies | No | The paper mentions tools and libraries such as 'python-sat [27]', 'c2d [17]', 'GloVe embeddings [29]', 'ResNet', and 'Adam [30]', but it does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | For this test, each LENSR model comprised 3 layers, with 50 hidden units per layer. LENSR produces a 100-dimensional embedding for each input formula/assignment. The neural network used for classification is a 2-layer perceptron with 150 hidden units. We set m = 1.0 in Eqn. 3 and λr = 0.1 in Eqn. 4. [...] h is an MLP with 2 layers and 512 hidden units. [...] We optimized this objective using Adam [30] with learning rate 10^-3. [...] λ = 0.1 is a trade-off factor. |
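To make the reported setup concrete, the sketch below shows a classifier with the shape described in the Experiment Setup row: a 2-layer perceptron with 150 hidden units operating on LENSR's 100-dimensional embeddings. Only the layer sizes come from the paper's text; the weight initialization, activation, binary output, and all function names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical 2-layer perceptron matching the reported sizes:
# 100-d LENSR embedding -> 150 hidden units -> output logits.
# (Binary output is an assumption; the paper only states the layer sizes.)
W1 = rng.normal(0.0, 0.1, size=(100, 150))
b1 = np.zeros(150)
W2 = rng.normal(0.0, 0.1, size=(150, 2))
b2 = np.zeros(2)

def classify(embedding):
    """Forward pass: ReLU hidden layer, then argmax over output logits."""
    h = relu(embedding @ W1 + b1)
    logits = h @ W2 + b2
    return int(np.argmax(logits))

# Example: classify a random 100-dimensional "embedding".
pred = classify(rng.normal(size=100))
print(pred)
```

Training such a classifier with Adam at learning rate 10^-3, as the paper reports, would replace the random weights above with learned ones; this snippet only illustrates the forward-pass shapes.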