Psychological Forest: Predicting Human Behavior

Authors: Ori Plonsky, Ido Erev, Tamir Hazan, Moshe Tennenholtz

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare the predictive accuracy of our approach with current practice and show it outperforms the state-of-the-art for our data. Our experiments focus on the aggregate human choice behavior in different choice problems, and on its progression over time. To that end, we use the CPC data (available, with more detailed accounts, at the CPC's website). Table 3 exhibits the results of the various algorithm-feature combinations in predicting behavior in the test set.
Researcher Affiliation | Academia | Technion - Israel Institute of Technology, Haifa, 3200003, Israel; plonsky@campus.technion.ac.il, erev@tx.technion.ac.il, tamir.hazan@technion.ac.il, moshet@ie.technion.ac.il
Pseudocode | No | The paper describes features and formulas but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper points to the CPC website for the data but provides no link to, or statement about, an open-source release of the authors' own implementation.
Open Datasets | Yes | We use the CPC data (available, with more detailed accounts, at the CPC's website). The competition's website: http://departments.agri.huji.ac.il/cpc2015
Dataset Splits | Yes | In the CPC, 90 problems served as training data and the other 60 served as the test data. We trained each algorithm-features combination on the CPC's training data of 90 choice settings (each consisting five time-points, or blocks) and test its predictive value in the CPC's test data of 60 different choice settings. Performance is thus measured according to MSE of 300 choice rates in the range [0, 1]. ...with values tuned to fit the training data best (according to 10 rounds of 10-fold cross validation). (See the evaluation sketch below the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments.
Software Dependencies | No | The paper mentions software packages like "R package randomForest", "R package neuralnet", "R package e1071", and "R package kknn", but does not specify their version numbers.
Experiment Setup | Yes | The algorithms tested include random forest (using R package randomForest); neural nets (using R package neuralnet) with one hidden layer and either 3, 6, or 12 nodes and with two hidden layers and either 3 or 6 nodes in each layer; SVM (using R package e1071) with radial and polynomial kernels; and kNN (using R package kknn) with 1, 3, or 5 nearest neighbors. We trained each algorithm-features combination with both the packages' default hyper-parameter values and with values tuned to fit the training data best (according to 10 rounds of 10-fold cross validation). (See the model-fitting sketch below the table.)
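
The Dataset Splits row describes a 90-problem training set, a 60-problem test set with five blocks per problem (300 predicted choice rates), scoring by MSE, and hyper-parameter tuning via 10 rounds of 10-fold cross-validation. Below is a minimal R sketch of that evaluation protocol; it is not the authors' code, the data frame and column names (train_set, choice_rate) are hypothetical placeholders, and since the excerpt does not say whether folds are formed over problems or over problem-block rows, the sketch simply folds over rows.

    ## Minimal sketch of the evaluation protocol quoted above (not the authors' code).
    ## `data$choice_rate` holds observed aggregate choice rates in [0, 1].

    # Mean squared error of predicted vs. observed choice rates
    mse <- function(pred, obs) mean((pred - obs)^2)

    # 10 rounds of 10-fold cross-validation, used to tune hyper-parameters on the
    # training problems. `fit_fun(data)` returns a fitted model;
    # `pred_fun(model, newdata)` returns predicted choice rates.
    repeated_cv_mse <- function(data, fit_fun, pred_fun, rounds = 10, folds = 10) {
      fold_scores <- c()
      for (r in seq_len(rounds)) {
        fold_id <- sample(rep(seq_len(folds), length.out = nrow(data)))
        for (k in seq_len(folds)) {
          model <- fit_fun(data[fold_id != k, ])
          pred  <- pred_fun(model, data[fold_id == k, ])
          fold_scores <- c(fold_scores, mse(pred, data$choice_rate[fold_id == k]))
        }
      }
      mean(fold_scores)  # average over the 100 held-out folds
    }

Under this reading, hyper-parameter values would be chosen to minimize repeated_cv_mse on the 90 training problems, and the reported score would be the MSE over the 60 test problems x 5 blocks = 300 choice rates.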
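
The Experiment Setup row names four algorithm families and the R packages used to run them. The sketch below shows what those calls could look like with the quoted settings; it is illustrative only, the formula and the train_set / test_set objects are hypothetical, and the paper does not report package versions, so exact interfaces may differ.

    ## Illustrative calls for the algorithm-package combinations named above
    ## (not the authors' code). `train_set`, `test_set`, and `choice_rate` are
    ## hypothetical names for the feature data and prediction target.
    library(randomForest)
    library(neuralnet)
    library(e1071)
    library(kknn)

    features <- setdiff(names(train_set), "choice_rate")
    form     <- as.formula(paste("choice_rate ~", paste(features, collapse = " + ")))

    rf_fit  <- randomForest(form, data = train_set)                   # package defaults
    nn_fit  <- neuralnet(form, data = train_set, hidden = c(6, 6),    # two hidden layers, 6 nodes each
                         linear.output = TRUE)
    svm_fit <- svm(form, data = train_set, kernel = "radial")         # or kernel = "polynomial"
    knn_fit <- kknn(form, train = train_set, test = test_set, k = 3)  # k in {1, 3, 5}

    ## Test-set MSE over the 300 predicted choice rates (random forest shown)
    rf_mse <- mean((predict(rf_fit, newdata = test_set) - test_set$choice_rate)^2)

Per the quoted setup, each algorithm-features combination would be trained both with the packages' defaults and with values tuned through the repeated 10-fold cross-validation described in the Dataset Splits row.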