Aggregating Crowd Wisdom with Side Information via a Clustering-based Label-aware Autoencoder
Authors: Li'ang Yin, Yunfei Liu, Weinan Zhang, Yong Yu
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on real-world tasks demonstrate the significant improvement of CLA compared with the state-of-the-art aggregation algorithms. |
| Researcher Affiliation | Academia | Li'ang Yin, Yunfei Liu, Weinan Zhang, Yong Yu, Shanghai Jiao Tong University, No.800 Dongchuan Road, Shanghai, China, {yinla,liuyunfei,wnzhang,yyu}@apex.sjtu.edu.cn |
| Pseudocode | No | The paper describes algorithms and processes textually but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Demo code is at https://github.com/coverdark/cla_demo |
| Open Datasets | Yes | Reuters contains a document categorization task... [Rodrigues et al., 2017]. CUB-200-2010 dataset contains tasks to label local characteristics for 6,033 bird images [Welinder et al., 2010]. |
| Dataset Splits | Yes | Hyperparameter search adopts a similar manner with LAA by splitting a dataset into a training set and a validation set [Yin et al., 2017]. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'TensorFlow' but does not specify a version number or other key software dependencies with their versions. |
| Experiment Setup | Yes | Sampling time T = 5. ... Here the learning rate is 0.001. Training is stable and usually achieves desirable inference accuracy after 1,500 epochs. |
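
For orientation, the dataset-split and experiment-setup rows above could be wired together roughly as in the minimal sketch below. Only the sampling time T = 5, the learning rate of 0.001, and the roughly 1,500 training epochs come from the paper; the file names, the 80/20 split ratio, and the stand-in Keras autoencoder are hypothetical placeholders, not the authors' released CLA implementation.

```python
# Minimal sketch (assumptions marked) of the reported training configuration.
# The data files and the stand-in model below are hypothetical placeholders;
# they are NOT the authors' CLA code.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

T_SAMPLES = 5          # "Sampling time T = 5" (CLA's sampling step itself is omitted here)
LEARNING_RATE = 1e-3   # "the learning rate is 0.001"
EPOCHS = 1500          # "achieves desirable inference accuracy after 1,500 epochs"

# Placeholder crowdsourced annotation matrix: one row of worker votes per task.
votes = np.load("crowd_votes.npy")  # hypothetical file name

# LAA-style hyperparameter search: hold out part of the tasks for validation.
train_votes, val_votes = train_test_split(votes, test_size=0.2, random_state=0)

# Stand-in autoencoder (not the CLA architecture) that reconstructs the vote matrix.
dim = votes.shape[1]
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(dim, activation="sigmoid"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
    loss="binary_crossentropy",
)
model.fit(
    train_votes, train_votes,
    validation_data=(val_votes, val_votes),
    epochs=EPOCHS, verbose=0,
)
```

Note that the actual CLA model additionally incorporates side information through its clustering-based, label-aware design, which this sketch does not attempt to reproduce.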