Structured Probabilistic End-to-End Learning from Crowds

Authors: Zhijun Chen, Huimin Wang, Hailong Sun, Pengpeng Chen, Tao Han, Xudong Liu, Jie Yang

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive validation on real-world datasets shows that our methods improve the state-of-the-art.
Researcher Affiliation | Academia | Zhijun Chen (1,2), Huimin Wang (1,2), Hailong Sun (1,2), Pengpeng Chen (1,2), Tao Han (1,2), Xudong Liu (1,2) and Jie Yang (3). Affiliations: 1) SKLSDE Lab, School of Computer Science and Engineering, Beihang University, China; 2) Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, China; 3) Web Information Systems, Delft University of Technology, Netherlands.
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper does not explicitly state that source code for the described methodology is released, nor does it link to a code repository.
Open Datasets | Yes | We performed experiments on two real-world datasets labeled from Amazon Mechanical Turk (AMT), i.e. the Sentiment Polarity Classification (SPC) dataset [Rodrigues et al., 2013] and the Music Genre Classification (MGC) dataset [Rodrigues et al., 2013].
Dataset Splits | No | The paper specifies test-set sizes but gives no details on the training and validation splits (e.g., percentages or exact sample counts) beyond stating a batch size and number of epochs for training.
Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU or CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions software components like 'Adam stochastic optimization' and 'MLP' but does not provide specific version numbers for any programming languages or libraries.
Experiment Setup | Yes | For the SPC dataset, we set the classifier in both SpeeLFC and SpeeLFC-D as an MLP with one hidden layer (with 1200 units, ReLU activations), using 50% dropout and Adam stochastic optimization [Kingma and Ba, 2014]. The learning rate is 0.0001, batch-size is 64, and epoch number is 200. In addition, the function $f_{\Theta_{NN_2}}(x^{(i)})$ in SpeeLFC-D is also an MLP with one hidden layer (with 128 units, ReLU activations, 50% dropout). And the values of the hyperparameters $\lambda_1$ and $\lambda_2$ in SpeeLFC-D are 0.001 and 100, respectively. In SpeeLFC, the values on the diagonal elements of $\Pi^{(j)}$ ($j = 1, \dots, J$) were initially set to 1.4, and the other values were set to 1. $\alpha^{(j)}$ ($j = 1, \dots, J$) in SpeeLFC-D was initially set to 0.028.
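For concreteness, the quoted setup maps onto a standard deep-learning configuration. Below is a minimal sketch assuming PyTorch (the paper does not name a framework); `n_features`, `n_classes`, and all variable names are illustrative placeholders, and only the layer sizes, activations, dropout rate, optimizer, and learning rate come from the quoted text.

```python
# Minimal sketch of the quoted SPC setup, assuming PyTorch.
import torch
import torch.nn as nn

n_features = 1000  # placeholder: depends on the SPC feature representation
n_classes = 2      # placeholder: sentiment polarity is a binary task

# Classifier shared by SpeeLFC and SpeeLFC-D: one hidden layer with
# 1200 units, ReLU activations, and 50% dropout.
classifier = nn.Sequential(
    nn.Linear(n_features, 1200),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(1200, n_classes),
)

# The function f_{Theta_{NN_2}}(x^(i)) in SpeeLFC-D: one hidden layer with
# 128 units, ReLU activations, 50% dropout (output width is an assumption).
f_theta_nn2 = nn.Sequential(
    nn.Linear(n_features, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(128, n_classes),
)

# Adam optimizer with the reported learning rate of 0.0001.
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)

# Regularization weights reported for SpeeLFC-D.
lambda_1, lambda_2 = 0.001, 100.0
```

The reported batch size of 64 and 200 training epochs would govern the training loop, and $\lambda_1$, $\lambda_2$ weight terms in the SpeeLFC-D objective; that objective is defined in the paper and is not reconstructed here.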