Predicting Peer-to-Peer Loan Rates Using Bayesian Non-Linear Regression
Authors: Zsolt Bitvai, Trevor Cohn
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that the peer-to-peer market is predictable, and identify a small set of key factors with high predictive power. Our approach outperforms baseline methods for predicting market rates, and generates substantial profit in a trading simulation. The results in Table 3 show that the model confidently beats the baselines (1.352), linear regression (1.004), as well as SVR (0.970), with a root mean square error of 0.596. |
| Researcher Affiliation | Academia | Zsolt Bitvai University of Sheffield Sheffield, United Kingdom z.bitvai@shef.ac.uk Trevor Cohn University of Melbourne Melbourne, Australia t.cohn@unimelb.edu.au |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | We have gathered around 3,500 loan applications between 2013 and 2014. The paper does not provide any access information (link, DOI, repository, or citation to a public source) for this dataset. |
| Dataset Splits | Yes | We fit a Gaussian Process over the normalized data with 4-fold cross-validation and measure the root mean square error and marginal likelihood. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers, such as programming languages, libraries, or specific frameworks. |
| Experiment Setup | No | The paper mentions that hyperparameters (bias terms, σd, ℓ, α values, and the noise variance σn²) are fit by optimizing the marginal likelihood, and that grid search is used for tuning, but it does not provide their specific values or ranges, nor details of the gradient ascent (e.g., learning rate). |
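The evaluation protocol quoted above (Gaussian Process regression over normalized data, 4-fold cross-validation, RMSE) can be sketched as follows. This is a generic, stdlib-only illustration, not the authors' implementation: the squared-exponential kernel, the hyperparameter values (σd, ℓ, noise), and the toy sine data standing in for the non-public loan dataset are all assumptions.

```python
import math
import random

def rbf(x1, x2, sigma_d=1.0, ell=1.0):
    # Squared-exponential covariance; sigma_d and ell are illustrative
    # hyperparameters (the paper fits these by maximizing marginal likelihood).
    return sigma_d ** 2 * math.exp(-((x1 - x2) ** 2) / (2 * ell ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting: returns x with A x = b.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(xs, ys, x_star, noise=0.1):
    # GP posterior mean at x_star: k_*^T (K + sigma_n^2 I)^{-1} y.
    K = [[rbf(a, b) + (noise ** 2 if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(x_star, xi) * ai for xi, ai in zip(xs, alpha))

def kfold_rmse(xs, ys, k=4, seed=0):
    # 4-fold cross-validation: train on k-1 folds, score RMSE on the held-out fold.
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sq_err, n = 0.0, 0
    for fold in folds:
        held = set(fold)
        tx = [xs[i] for i in idx if i not in held]
        ty = [ys[i] for i in idx if i not in held]
        for i in fold:
            sq_err += (gp_predict(tx, ty, xs[i]) - ys[i]) ** 2
            n += 1
    return math.sqrt(sq_err / n)

# Toy regression data standing in for the (non-public) loan-rate dataset.
xs = [i / 10 for i in range(40)]
ys = [math.sin(x) + 0.05 * random.Random(i).gauss(0, 1)
      for i, x in enumerate(xs)]
print(f"4-fold CV RMSE: {kfold_rmse(xs, ys):.3f}")
```

The paper's reported RMSE of 0.596 would come from this kind of loop run over the normalized loan features, with hyperparameters refit by marginal-likelihood gradient ascent rather than fixed as here.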