Online and Stochastic Learning with a Human Cognitive Bias
Authors: Hidekazu Oiwa, Hiroshi Nakagawa
AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Our experimental results show the superiority of the derived algorithm for problems involving human cognition." |
| Researcher Affiliation | Academia | Hidekazu Oiwa, The University of Tokyo and JSPS Research Fellow, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan (hidekazu.oiwa@gmail.com); Hiroshi Nakagawa, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan (nakagawa@dl.itc.u-tokyo.ac.jp) |
| Pseudocode | No | The paper states "The overall procedure of E-OGD is written in the supplementary material," but no pseudocode or algorithm block is present in the main text. |
| Open Source Code | No | The paper does not contain any concrete access information (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | "We used five large-scale data sets from the LIBSVM binary data collections" (http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html), and "We set up a synthetic scene recognition task as a binary classification problem using indoor recognition datasets" (http://web.mit.edu/torralba/www/indoor.html). A loading sketch appears below the table. |
| Dataset Splits | No | The paper mentions training and test sets, e.g., "we randomly sample 90% data from the dataset and used them as a training set and remaining data as a test set" for the webspam-t dataset. However, it does not give exact split percentages or sample counts for all datasets, and it never mentions a validation set in the general experimental setup, so the data partitioning is not fully reproducible. (A minimal sketch of such a 90/10 split appears below the table.) |
| Hardware Specification | No | No specific hardware details (such as GPU/CPU models, processor types, or memory amounts) used for running the experiments were provided. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment. |
| Experiment Setup | Yes | "We used the logistic loss as loss functions. Each algorithm learned the weight vector from the training set through 1 iteration." Learning rates follow the schedule η_t = η₀/√t, and the authors varied η₀ from 10³ down to 1.91 × 10⁻³ with common ratio 1/2 to obtain the step width minimizing cumulative loss. (A sketch of this grid and schedule follows the table.) |
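
For the Open Datasets row: the LIBSVM binary collections linked above are distributed in svmlight format, which scikit-learn can read directly. A minimal loading sketch; the filename is a placeholder, not one taken from the paper.

```python
from sklearn.datasets import load_svmlight_file

# Placeholder filename: download any binary dataset from the LIBSVM
# collection page linked above and point this path at it.
X, y = load_svmlight_file("webspam_unigram.svm")  # X: sparse CSR, y in {-1, +1}
print(X.shape, y.shape)
```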
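
The 90/10 random split quoted in the Dataset Splits row can be reproduced with a simple shuffle. A minimal sketch, assuming a fixed seed (the paper reports none) and illustrative names:

```python
import numpy as np

def random_split(X, y, train_fraction=0.9, seed=0):
    """Randomly sample `train_fraction` of the rows as a training set and
    keep the remainder as a test set. The seed is an assumption; the paper
    does not report one. Works for dense arrays and scipy CSR matrices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(train_fraction * len(y))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]
```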
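
The Experiment Setup row quotes a step-size schedule η_t = η₀/√t with η₀ swept geometrically from 10³ down to about 1.91 × 10⁻³ at ratio 1/2, i.e. 20 candidates, since 10³ · (1/2)¹⁹ ≈ 1.91 × 10⁻³. The sketch below reproduces that grid and a single logistic-loss online gradient descent pass over a dense feature matrix; this is plain OGD, not the paper's E-OGD variant, and every name in it is an illustrative assumption.

```python
import numpy as np

# Geometric grid of initial step widths: 10^3, 10^3/2, ..., ≈1.91e-3
# (20 candidates; 10^3 * (1/2)^19 ≈ 1.91e-3, matching the quoted sweep).
eta0_grid = [1e3 * 0.5 ** k for k in range(20)]

def ogd_logistic(X, y, eta0):
    """One pass of online gradient descent with the logistic loss and step
    size eta_t = eta0 / sqrt(t). Plain OGD for illustration; the paper's
    E-OGD modifies this update (its procedure is in their supplement)."""
    w = np.zeros(X.shape[1])
    cumulative_loss = 0.0
    for t, (x, label) in enumerate(zip(X, y), start=1):  # label in {-1, +1}
        margin = label * w.dot(x)
        cumulative_loss += np.logaddexp(0.0, -margin)    # log(1 + e^{-m})
        sigma = np.exp(-np.logaddexp(0.0, margin))       # 1 / (1 + e^{m}), stable
        w -= (eta0 / np.sqrt(t)) * (-label * sigma) * x  # gradient step
    return w, cumulative_loss
```

Per the quoted setup, one would run this for each value in `eta0_grid` and keep the η₀ whose pass attains the smallest cumulative loss.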