Neural Utility Functions

Authors: Porter Jenkins, Ahmad Farag, J. Stockton Jenkins, Huaxiu Yao, Suhang Wang, Zhenhui Li

AAAI 2021, pp. 7917-7925

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate that Neural Utility Functions can recover theoretical item relationships better than vanilla neural networks, analytically show existing neural networks are not quasi-concave and do not inherently reason about trade-offs, and that augmenting existing models with a utility loss function improves recommendation results. The Neural Utility Functions we propose are theoretically motivated, and yield strong empirical results.
Researcher Affiliation | Academia | Porter Jenkins¹, Ahmad Farag², J. Stockton Jenkins³, Huaxiu Yao¹, Suhang Wang¹, Zhenhui Li¹ (¹Pennsylvania State University, ²Georgia Institute of Technology, ³Brigham Young University)
Pseudocode | No | The paper includes a 'Training Procedure' section and Figure 1 illustrating the process, but it does not contain a formally labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | For code see https://github.com/porterjenkins/neural-utilityfunctions
Open Datasets | Yes | We also evaluate recommendation performance on the MovieLens 25M (Harper and Konstan 2015) and Amazon 18 (McAuley et al. 2015) datasets.
Dataset Splits | No | The paper specifies training and testing splits (e.g., '80% of the samples are allocated for training and 20% for testing'), but it does not explicitly mention a separate validation split with specific percentages or counts (see the split sketch after this table).
Hardware Specification | Yes | All models were implemented in PyTorch (Paszke et al. 2019) and trained on a Google Deep Learning VM with 60 GB of RAM and two Tesla K80 GPUs.
Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al. 2019)' as the implementation framework but does not provide a specific version number for PyTorch or any other software dependencies used.
Experiment Setup | Yes | We train all models using the Adam optimizer (Kingma and Ba 2014). We select k = 5 for the size of the complement and supplement sets. All models were implemented in PyTorch (Paszke et al. 2019) and trained on a Google Deep Learning VM with 60 GB of RAM and two Tesla K80 GPUs. We train each model multiple times to estimate the variance (see the training sketch after this table).
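
Since the reported protocol is only an 80/20 train/test split with no validation set, the following is a minimal sketch of how such a split could be reproduced in PyTorch. The synthetic interaction tensors and the random seed are assumptions for illustration, not details taken from the paper.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Hypothetical interaction data; the paper does not publish its splitting code,
# so the dataset construction and seed here are assumptions.
users = torch.randint(0, 100, (1_000,))
items = torch.randint(0, 500, (1_000,))
ratings = torch.rand(1_000)
dataset = TensorDataset(users, items, ratings)

# 80% of the samples for training and 20% for testing, as stated in the paper.
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(
    dataset,
    [n_train, len(dataset) - n_train],
    generator=torch.Generator().manual_seed(0),  # assumed seed
)
```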
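The quoted experiment setup names the Adam optimizer, k = 5, and PyTorch, but not the model, learning rate, loss, or epoch count. The sketch below fills those gaps with placeholder choices (a dot-product recommender, MSE loss, lr = 1e-3, 10 epochs) purely for illustration; it does not implement the paper's utility loss.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

class SimpleRecommender(nn.Module):
    """Placeholder dot-product recommender; the paper's architectures and its
    utility loss term are not specified in the quoted setup."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, users, items):
        # Score a (user, item) pair by the dot product of their embeddings.
        return (self.user_emb(users) * self.item_emb(items)).sum(dim=-1)

# Hypothetical training data (user id, item id, rating).
users = torch.randint(0, 100, (1_000,))
items = torch.randint(0, 500, (1_000,))
ratings = torch.rand(1_000)
loader = DataLoader(TensorDataset(users, items, ratings), batch_size=256, shuffle=True)

K = 5  # complement/supplement set size reported in the paper (unused in this simplified loop)

model = SimpleRecommender(n_users=100, n_items=500)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam as reported; learning rate assumed
criterion = nn.MSELoss()  # assumed objective, standing in for the paper's loss

for epoch in range(10):  # epoch count not reported; 10 is arbitrary
    for u, i, r in loader:
        optimizer.zero_grad()
        loss = criterion(model(u, i), r)
        loss.backward()
        optimizer.step()
```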