Adaboost with Auto-Evaluation for Conversational Models

Authors: Juncen Li, Ping Luo, Ganbin Zhou, Fen Lin, Cheng Niu

IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we do some empirical experiments to evaluate our method. We demonstrate that AwE visibly boosts the performance of a single model and also outperforms other ensemble methods for conversational models.
Researcher Affiliation | Collaboration | Juncen Li (1), Ping Luo (2,3), Ganbin Zhou (2,3), Fen Lin (1), Cheng Niu (1). (1) WeChat Search Application Department, Tencent, China; (2) Key Lab of Intelligent Information Processing of Chinese Academy of Sciences (CAS), Institute of Computing Technology, CAS, Beijing 100190, China; (3) University of Chinese Academy of Sciences, Beijing 100049, China
Pseudocode | Yes | Algorithm 1: AUTO-EVALUATION ADABOOST (a generic sketch of the boosting loop follows the table)
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available.
Open Datasets | No | We collect nearly 14 million post-response pairs from Tencent Weibo. Removing spam and advertisements from that dataset, only 803,716 high-quality post-response pairs are retained.
Dataset Splits | Yes | Table 1: 773,315 training pairs; 28,949 validation pairs; 1,000 test posts
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. It only discusses the model architecture and training settings.
Software Dependencies | No | The paper mentions using an RNN Encoder-Decoder with GRU and specific settings like beam size, but does not specify any software names with version numbers (e.g., Python, TensorFlow, PyTorch versions) needed to replicate the experiment.
Experiment Setup | Yes | We use a 1-layer GRU with 512 cells for both the encoder and the decoder. Both embedding dimensions are set to 128. We initialize all parameters with the uniform distribution between -0.1 and 0.1. We set the minibatch size to 256. We use beam search for generation and set the beam size to 10. (A configuration sketch follows the table.)
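
The Pseudocode row names "Algorithm 1 AUTO-EVALUATION ADABOOST" but the paper's procedure is not reproduced here. Below is a minimal, generic sketch of an AdaBoost-style loop in which an automatic evaluator, rather than human labels, judges each generated response; it illustrates the idea behind the name only and is not the paper's exact algorithm. The hooks `train_model` and `auto_evaluate`, the error threshold, and the weight-update details are all assumptions.

```python
# Generic AdaBoost-style boosting with an automatic evaluator standing in for
# gold labels. NOT the paper's Algorithm 1; `train_model` and `auto_evaluate`
# are hypothetical hooks supplied by the caller.
import math

def auto_evaluation_adaboost(pairs, rounds, train_model, auto_evaluate,
                             threshold=0.5):
    """pairs: list of (post, response) training pairs."""
    n = len(pairs)
    weights = [1.0 / n] * n                  # uniform initial pair weights
    models, alphas = [], []
    for _ in range(rounds):
        model = train_model(pairs, weights)  # fit one learner on weighted data
        # A pair counts as an error when the auto-evaluator scores the model's
        # response to its post below the threshold (no human labels needed).
        errs = [1 if auto_evaluate(model, post) < threshold else 0
                for post, _ in pairs]
        eps = sum(w for w, e in zip(weights, errs) if e)   # weighted error rate
        eps = min(max(eps, 1e-10), 1.0 - 1e-10)            # keep alpha finite
        alpha = 0.5 * math.log((1.0 - eps) / eps)          # classic AdaBoost weight
        # Up-weight pairs the current model failed on, down-weight the rest.
        weights = [w * math.exp(alpha if e else -alpha)
                   for w, e in zip(weights, errs)]
        total = sum(weights)
        weights = [w / total for w in weights]             # renormalize
        models.append(model)
        alphas.append(alpha)
    return models, alphas                                  # weighted ensemble
```

At generation time, such an ensemble would typically combine the member models' candidate responses, weighting each model's votes by its alpha.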
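
Since the paper releases no code, the following is a configuration sketch only: the framework (PyTorch), the class structure, and the vocabulary size are assumptions, while the hyperparameters come from the Experiment Setup row (1-layer GRU with 512 cells for encoder and decoder, 128-dim embeddings, uniform(-0.1, 0.1) initialization, minibatch size 256, beam size 10).

```python
# Illustrative PyTorch sketch of the reported encoder-decoder settings.
# Framework, class names, and VOCAB_SIZE are assumptions, not the authors' code.
import torch.nn as nn

VOCAB_SIZE = 40000   # assumption: vocabulary size is not reported in the excerpt
EMBED_DIM = 128      # "Both embedding dimensions are set to 128"
HIDDEN_DIM = 512     # "1-layer GRU with 512 cells"

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.tgt_embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.encoder = nn.GRU(EMBED_DIM, HIDDEN_DIM, num_layers=1, batch_first=True)
        self.decoder = nn.GRU(EMBED_DIM, HIDDEN_DIM, num_layers=1, batch_first=True)
        self.out = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)
        # "We initialize all parameters with the uniform distribution
        #  between -0.1 and 0.1."
        for p in self.parameters():
            nn.init.uniform_(p, -0.1, 0.1)

    def forward(self, src_ids, tgt_ids):
        # Final encoder hidden state is used as the decoder's initial state.
        _, h = self.encoder(self.src_embed(src_ids))
        dec_out, _ = self.decoder(self.tgt_embed(tgt_ids), h)
        return self.out(dec_out)             # logits over the vocabulary

model = Seq2Seq()
BATCH_SIZE = 256  # "We set the minibatch size to 256."
BEAM_SIZE = 10    # beam search with beam size 10 at generation time
```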