Intelligent Advice Provisioning for Repeated Interaction
Authors: Priel Levy, David Sarne
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive evaluation of the proposed methods, involving hundreds of human participants, reveals that both methods meet their primary design goal (either increased user profit or increased user satisfaction with the advisor), while performing at least as well on the alternative goal, compared to having people perform with: (a) no advisor at all; (b) an advisor providing the theoretic-optimal advice; and (c) an effective suboptimal-advice-based advisor designed for the non-repeated variant of the experimental framework. |
| Researcher Affiliation | Academia | Priel Levy, Bar-Ilan University, Israel (levypri1@cs.biu.ac.il); David Sarne, Bar-Ilan University, Israel (sarned@cs.biu.ac.il) |
| Pseudocode | No | The paper describes the proposed methods (S-Gradual and S-Aggregate) in detail, but it does not include any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating the public availability of source code for the described methodology. |
| Open Datasets | No | The paper states: 'We used 10 randomly generated sequences of x values (25 values in each) and each experiment was randomly assigned one of these sequences. Similarly, the actual car’s worth v associated with each of the ten x sequences in each experiment was taken from one of ten pre-drawn sets of 25 values within the range 0-1000. See supplementary material in the authors web-site for the full set of values used.' Although the experimental data is described and a supplementary link is mentioned, it is hosted on the authors' own website rather than in a formalized public repository with specific attribution/citation, as required for 'Yes'. |
| Dataset Splits | No | The paper describes the experimental setup involving human participants playing 25 games each, and how participants were assigned to treatments. However, it does not specify traditional dataset splits (e.g., training, validation, test sets) as commonly used for model evaluation. Instead, the 25 games constitute the experimental data collected for analysis. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. It only mentions the game was implemented as a 'java-script web-based application'. |
| Software Dependencies | No | The paper mentions the Car Purchasing Game (CPG) was implemented as a 'java-script web-based application'. However, it does not list any specific software libraries or their version numbers beyond the programming language itself. |
| Experiment Setup | Yes | The car’s worth a priori distribution was set to be uniform between 0 and 1000 (i.e., V = 1000). The S-Gradual advisor was implemented with an initial advice of $300 for x < 2 and $700 for x > 2. Each subsequent advice further converged towards the expected-profit-maximizing advice by $30. ... The Beta function parameters used were (α = 1, β = 4) and (α = 0.7, β = 4) for positive and negative differences, respectively... |
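The S-Gradual schedule quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the authors' implementation (the paper's advisor was a JavaScript web application): the `optimal_advice` target and the per-round indexing are assumptions, since the paper only states the initial advice values and the $30 per-round convergence step.

```python
def s_gradual_advice(x, optimal_advice, round_index, step=30):
    """Illustrative S-Gradual advice schedule.

    Starts from $300 (when x < 2) or $700 (when x > 2) and moves
    `step` dollars per round toward the assumed
    expected-profit-maximizing advice, without overshooting it.
    """
    initial = 300 if x < 2 else 700
    # Total movement so far, capped at the distance to the target.
    delta = min(step * round_index, abs(optimal_advice - initial))
    direction = 1 if optimal_advice >= initial else -1
    return initial + direction * delta


# Example: with a hypothetical optimal advice of $500, an x > 2 game
# starts at $700 and converges downward by $30 each round.
schedule = [s_gradual_advice(3, 500, r) for r in range(4)]
```

Here `schedule` would be `[700, 670, 640, 610]`, and after enough rounds the advice settles at the target value rather than passing it.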