MiSC: Mixed Strategies Crowdsourcing
Authors: Ching Yun Ko, Rui Lin, Shu Li, Ngai Wong
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work, we propose MiSC (Mixed Strategies Crowdsourcing), a versatile framework integrating arbitrary conventional crowdsourcing and tensor completion techniques. In particular, we propose a novel iterative Tucker label aggregation algorithm that outperforms state-of-the-art methods in extensive experiments. ... Numerical experiments comparing the proposed MiSC (mixed strategies crowdsourcing) with pure label aggregation methods are given in Section 5. |
| Researcher Affiliation | Academia | 1 The University of Hong Kong, Hong Kong; 2 Nanjing University, Nanjing 210023, China; {cyko, linrui, nwong}@eee.hku.hk, lis@smail.nju.edu.cn |
| Pseudocode | Yes | Algorithm 1 Truncated higher-order singular value decomposition (HOSVD), Algorithm 2 Higher-order orthogonal iteration, Algorithm 3 Mixed Strategies Crowdsourcing (MiSC) |
| Open Source Code | No | The paper provides links for |
| Open Datasets | Yes | In this section, the proposed mixed complete-aggregate strategies crowdsourcing algorithms are compared with conventional label aggregation methods on six popular datasets, including Web dataset [Zhou et al., 2012], BM dataset [Mozafari et al., 2014], RTE dataset [Snow et al., 2008], Dog dataset [Deng et al., 2009; Zhou et al., 2012], Temp dataset [Snow et al., 2008], and Bluebirds dataset [Welinder et al., 2010]. |
| Dataset Splits | No | The paper mentions the datasets and their |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, memory). |
| Software Dependencies | No | The paper mentions |
| Experiment Setup | No | The paper describes the algorithms and their mathematical foundations, but it does not specify concrete experimental setup details such as hyperparameter values (e.g., learning rates, batch sizes, number of iterations/epochs), optimizer settings, or other training configurations. |
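The table above contrasts MiSC with "pure label aggregation methods." The paper does not specify which baseline aggregator the excerpts refer to, but the simplest form of label aggregation is majority voting over redundant worker labels. The sketch below is a generic illustration of that idea, not code from the paper; the function name and data layout are assumptions.

```python
from collections import Counter

def majority_vote(labels_by_task):
    # labels_by_task maps each task id to the list of labels
    # collected from different crowd workers for that task.
    # The aggregated label is simply the most frequent one.
    return {
        task: Counter(labels).most_common(1)[0][0]
        for task, labels in labels_by_task.items()
    }
```

For example, `majority_vote({"q1": ["cat", "cat", "dog"]})` aggregates the three worker labels for `q1` into `"cat"`. MiSC's contribution is to go beyond such per-task voting by exploiting structure across tasks and workers via tensor completion.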
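The pseudocode row lists truncated HOSVD (Algorithm 1) as one of MiSC's building blocks. The following is a minimal numpy sketch of the standard truncated HOSVD procedure (truncated SVDs of the mode-n unfoldings, then projection onto the resulting factor matrices); it reflects the textbook algorithm under assumed helper names, not the paper's exact implementation.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: bring `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    # Inverse of unfold for a tensor of the given target shape.
    front = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(front), 0, mode)

def mode_product(T, U, mode):
    # n-mode product: multiply T along `mode` by the matrix U.
    out_shape = list(T.shape)
    out_shape[mode] = U.shape[0]
    return fold(U @ unfold(T, mode), mode, out_shape)

def truncated_hosvd(T, ranks):
    # Factor matrices: leading left singular vectors of each unfolding.
    factors = []
    for n, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        factors.append(U[:, :r])
    # Core tensor: project T onto the factor matrices.
    core = T
    for n, U in enumerate(factors):
        core = mode_product(core, U.T, n)
    return core, factors
```

With `ranks` equal to the full tensor dimensions the decomposition is exact; truncating the ranks yields the low-multilinear-rank approximation that tensor-completion-based aggregation builds on.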