DeepPSL: End-to-End Perception and Reasoning
Authors: Sridhar Dasaratha, Sai Akhil Puranam, Karmvir Singh Phogat, Sunil Reddy Tiyyagura, Nigel P. Duffy
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluation on three different tasks demonstrates that DeepPSL significantly outperforms state-of-the-art neuro-symbolic methods on scalability while achieving comparable or better accuracy. |
| Researcher Affiliation | Industry | Sridhar Dasaratha¹, Sai Akhil Puranam¹, Karmvir Singh Phogat¹, Sunil Reddy Tiyyagura¹, Nigel P. Duffy² — ¹EY Global Delivery Services India LLP, ²Ernst & Young (EY) LLP USA |
| Pseudocode | Yes | Algorithm 1 (Joint optimization: backpropagating loss to the neural network). A generic sketch of this end-to-end training pattern appears after the table. |
| Open Source Code | No | The paper does not include an explicit statement about releasing the source code for DeepPSL or provide a direct link to a code repository for the methodology. |
| Open Datasets | Yes | The goal of this task is to predict the sum of digits present in two MNIST images [Manhaeve et al., 2018]. ... We use data from the Cora and Citeseer scientific datasets [Yang et al., 2016]. |
| Dataset Splits | Yes | Each of these tasks contains train, validation and test splits in the corresponding dataset(s). ... Table 4: Dataset splits for semi-supervised classification task (e.g., Cora: 140/500/1000 train/val/test). |
| Hardware Specification | Yes | The experiments are performed on a MacBook Pro with a 2.6 GHz 6-core Intel i7 processor. |
| Software Dependencies | No | The paper mentions software like TensorFlow in the context of related work but does not provide specific version numbers for any software dependencies used in its own experimental setup. |
| Experiment Setup | Yes | Table 1: Hyperparameters used for DeepPSL [lists Optimizer, Learning rate, Max iterations, α, µ, Update steps, Epochs]. ... The CNN consists of two convolution (CONV) layers with 32 and 64 filters... This is followed by two fully connected layers with 128 and 10 nodes... A batch size of 16 is used for training. (This CNN is sketched in code below the table.) |
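
The CNN quoted in the Experiment Setup row is specific enough to reconstruct. Below is a minimal Keras sketch; the excerpt gives only the filter/node counts and batch size, so the kernel sizes, ReLU activations, max-pooling, softmax output, and optimizer choice here are assumptions, not details confirmed by the paper.

```python
# Minimal sketch of the CNN described in the Experiment Setup row.
# Only the filter/node counts and batch size come from the paper;
# kernel sizes, activations, pooling, and optimizer are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_mnist_cnn():
    return models.Sequential([
        layers.Input(shape=(28, 28, 1)),          # single-channel MNIST digits
        layers.Conv2D(32, 3, activation="relu"),  # first CONV layer: 32 filters
        layers.MaxPooling2D(2),                   # pooling assumed, not stated
        layers.Conv2D(64, 3, activation="relu"),  # second CONV layer: 64 filters
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),     # fully connected: 128 nodes
        layers.Dense(10, activation="softmax"),   # fully connected: 10 digit classes
    ])

model = build_mnist_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# Training would use batch_size=16, as reported in the paper.
```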
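The Pseudocode row cites Algorithm 1, which backpropagates a loss through symbolic inference into the neural network, and the Open Datasets row names the MNIST-sum task. The excerpt does not reproduce the algorithm itself, so the sketch below shows only the general end-to-end pattern on MNIST-sum, substituting an exhaustive marginalization over digit pairs for DeepPSL's PSL-based inference. The `sum_distribution` layer, learning rate, and training loop are illustrative assumptions, not the paper's method.

```python
import tensorflow as tf

# p1, p2: (batch, 10) digit probabilities from the CNN for the two images.
# Probability of each sum s in 0..18 is sum over pairs i + j = s of
# p1[i] * p2[j]: the outer product folded along its anti-diagonals.
def sum_distribution(p1, p2):
    outer = tf.einsum("bi,bj->bij", p1, p2)             # (batch, 10, 10)
    sums = tf.range(10)[:, None] + tf.range(10)[None]   # (10, 10) digit-pair sums
    one_hot = tf.one_hot(sums, depth=19, dtype=outer.dtype)  # (10, 10, 19)
    return tf.einsum("bij,ijs->bs", outer, one_hot)     # (batch, 19)

cnn = build_mnist_cnn()  # from the previous sketch
optimizer = tf.keras.optimizers.Adam(1e-3)  # learning rate is an assumption

@tf.function
def train_step(img1, img2, sum_label):
    with tf.GradientTape() as tape:
        p_sum = sum_distribution(cnn(img1), cnn(img2))
        # Cross-entropy on the sum label only: gradients flow through the
        # reasoning layer back into the CNN without any digit-level labels.
        loss = tf.reduce_mean(
            tf.keras.losses.sparse_categorical_crossentropy(sum_label, p_sum))
    grads = tape.gradient(loss, cnn.trainable_variables)
    optimizer.apply_gradients(zip(grads, cnn.trainable_variables))
    return loss
```

The design point this illustrates is the one the paper's Algorithm 1 addresses: supervision arrives at the symbolic output (the sum), and a differentiable reasoning layer lets that signal reach the perception network end to end.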