Interactive Learning with Proactive Cognition Enhancement for Crowd Workers

Authors: Jing Zhang, Huihui Wang, Shunmei Meng, Victor S. Sheng

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on three real-world learning tasks demonstrate that our ILPCE significantly outperforms five representative state-of-the-art methods. We used three image classification datasets that are not easy for human experts in our experiments.
Researcher Affiliation | Academia | Jing Zhang¹, Huihui Wang¹, Shunmei Meng¹, Victor S. Sheng². ¹School of Computer Science and Engineering, Nanjing University of Science and Technology, 200 Xiaolingwei Street, Nanjing 210094, China. ²Department of Computer Science, Texas Tech University, 2500 Broadway, Lubbock, TX 79409, U.S.A.
Pseudocode | Yes | Algorithm 1 ILPCE. Input: D_U, {ν, Λ, α, β}, {b, c, τ}, m, n. Output: learning model h(x). (See the interface sketch after this table.)
Open Source Code | No | The paper does not provide a direct link to a source code repository or explicitly state that the code for its methodology is available.
Open Datasets | Yes | We used three image classification datasets that are not easy for human experts in our experiments. (1) Dataset Butterflies in (Mac Aodha et al. 2018)... (2) Bird species classification on the Caltech-UCSD Birds 200 dataset (Welinder et al. 2010)... (3) Dataset Dogs in (Bi et al. 2014) contains 10 dog species extracted from ImageNet (Deng et al. 2009)...
Dataset Splits | Yes | For all the datasets, we held out 30% of the images from each class as the test sample for learning models. The remaining 70% were used for the (inter)active learning processes. (See the split sketch after this table.)
Hardware Specification | No | The paper does not specify the hardware used (e.g., GPU/CPU models, memory) to run the experiments.
Software Dependencies | No | The paper states that "All learning models are trained with SIFT (Lowe 2004) features using logistic regression with L2 regularization" but does not specify version numbers for any software, libraries, or dependencies. (See the classifier sketch after this table.)
Experiment Setup | Yes | The parameter settings of our method are as follows: each element in the hyper-parameters ν and Λ is set to 1/K, where K is the number of classes; that is, we use uniform priors. The hyper-parameters (α, β) for the Gamma distribution are set to (5.0, 1.0)... The parameters {b, c, τ} for the cognition model are set to {5.07, 2.96, 4.80}... Finally, as mentioned before, we set m = 5 and n = 10. (See the configuration sketch after this table.)
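
The Dataset Splits row describes a per-class 70/30 hold-out. Below is a minimal sketch of that protocol, assuming scikit-learn is available and that `features`/`labels` hold the image feature matrix and class labels; the paper does not say which tooling was actually used.

    import numpy as np
    from sklearn.model_selection import train_test_split

    def split_per_class(features, labels, test_fraction=0.30, seed=0):
        # Hold out 30% of the images from each class as the test sample;
        # the remaining 70% form the pool for the (inter)active learning process.
        pool_X, test_X, pool_y, test_y = train_test_split(
            features, labels,
            test_size=test_fraction,
            stratify=labels,        # stratification enforces the per-class 30% hold-out
            random_state=seed)
        return pool_X, test_X, pool_y, test_y

Stratifying on the labels is what realizes the "from each class" part of the quoted split; a plain random split would only hold out 30% overall.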
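The Software Dependencies row quotes L2-regularized logistic regression on SIFT features but names no library or versions. The sketch below assumes scikit-learn and a pre-computed, fixed-length SIFT-based feature matrix (e.g., a bag-of-visual-words encoding); the regularization strength C is not reported, so the value here is a placeholder.

    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def build_classifier(C=1.0):
        # Multi-class logistic regression with an L2 penalty, as quoted from the paper.
        # C (inverse regularization strength) is a placeholder, not a reported value.
        return make_pipeline(
            StandardScaler(),
            LogisticRegression(penalty="l2", C=C, max_iter=1000))

    # Hypothetical usage with the split from the previous sketch:
    # clf = build_classifier()
    # clf.fit(pool_X, pool_y)
    # print(clf.score(test_X, test_y))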
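The Experiment Setup row lists all reported hyper-parameter values. The sketch below only collects them in one structure; the field names are mine, and K (the number of classes) is dataset-dependent, e.g. 10 for the Dogs data.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ILPCEConfig:
        # Values quoted in the Experiment Setup row.
        num_classes: int          # K, dataset-dependent
        alpha: float = 5.0        # (α, β) for the Gamma distribution
        beta: float = 1.0
        b: float = 5.07           # {b, c, τ} for the cognition model
        c: float = 2.96
        tau: float = 4.80
        m: int = 5
        n: int = 10

        @property
        def nu(self) -> List[float]:
            # Uniform prior: every element of ν is 1/K.
            return [1.0 / self.num_classes] * self.num_classes

        @property
        def lam(self) -> List[float]:
            # Uniform prior: every element of Λ is 1/K.
            return [1.0 / self.num_classes] * self.num_classes

    # Example: cfg = ILPCEConfig(num_classes=10)   # the Dogs dataset has 10 species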
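The Pseudocode row reproduces only the Input/Output line of Algorithm 1, so the skeleton below is not the ILPCE algorithm itself: it is a generic pool-based interactive-learning loop with that signature, with the ILPCE-specific instance selection and proactive cognition-enhancement steps reduced to a placeholder and `query_labels` standing in for a hypothetical crowd-labeling callback.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ilpce_skeleton(D_U, cfg, query_labels, rounds=20):
        # Mirrors the stated signature only: unlabeled pool D_U plus the
        # hyper-parameters in cfg go in, a learning model h(x) comes out.
        # The batch selection below is NOT the ILPCE criterion, which uses the
        # probabilistic model (ν, Λ, α, β) and the worker cognition model (b, c, τ).
        labeled_X, labeled_y = [], []
        unlabeled = list(range(len(D_U)))
        h = LogisticRegression(penalty="l2", max_iter=1000)
        for _ in range(rounds):
            if not unlabeled:
                break
            batch, unlabeled = unlabeled[:cfg.n], unlabeled[cfg.n:]
            labels = query_labels(batch)            # crowd-sourced labels for the batch
            labeled_X.extend(D_U[i] for i in batch)
            labeled_y.extend(labels)
            if len(set(labeled_y)) > 1:             # need at least two classes to fit
                h.fit(np.asarray(labeled_X), np.asarray(labeled_y))
        return h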