RelNN: A Deep Neural Model for Relational Learning
Authors: Seyed Mehran Kazemi, David Poole
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning. |
| Researcher Affiliation | Academia | Seyed Mehran Kazemi, David Poole University of British Columbia Vancouver, Canada {smkazemi, poole}@cs.ubc.ca |
| Pseudocode | No | The paper describes mathematical formulations and processes but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code: https://github.com/Mehran-k/RelNN |
| Open Datasets | Yes | Our first dataset is the Movielens 1M dataset (Harper and Konstan 2015)... Our second dataset is from the PAKDD15 gender prediction competition... Our third dataset contains all Chinese and Mexican restaurants in the Yelp dataset challenge... |
| Dataset Splits | No | For all experiments, we split the data into 80/20 percent train/test. No explicit mention of a separate validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We imposed a Laplacian prior on all our parameters (weights and numeric latent properties). For classification, we further regularized our model predictions towards the mean of the training set using a hyper-parameter λ as: Prob = λ · mean + (1 − λ) · (Model Signal). |
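The λ-blend described in the experiment setup row is a convex combination of the training-set mean and the model's raw output. A minimal sketch, assuming NumPy is available; the function and variable names are illustrative and not taken from the paper's code:

```python
import numpy as np

def blend_with_mean(model_signal, train_mean, lam):
    """Regularize predictions toward the training-set mean.

    Computes Prob = lam * mean + (1 - lam) * model_signal,
    the convex combination described in the experiment setup.
    Note: names here are hypothetical, not from the RelNN repo.
    """
    return lam * train_mean + (1.0 - lam) * np.asarray(model_signal, dtype=float)

# With lam = 0 the model signal is used unchanged; with lam = 1
# every prediction collapses to the training mean.
preds = blend_with_mean([0.9, 0.2], train_mean=0.5, lam=0.4)
```

Larger λ pulls predictions harder toward the mean, which acts as a simple shrinkage regularizer on top of the Laplacian prior.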