Empower Distantly Supervised Relation Extraction with Collaborative Adversarial Training

Authors: Tao Chen, Haochen Shi, Liyuan Liu, Siliang Tang, Jian Shao, Zhigang Chen, Yueting Zhuang
Pages: 12675-12682

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on NYT (Riedel, Yao, and McCallum 2010), the public DSRE benchmark. MULTICAST leads to consistent improvements over the previous state-of-the-art systems. It demonstrates the effectiveness of MULTICAST and validates our intuition that the data utilization issue is the key bottleneck. We further conduct ablation studies to verify that MULTICAST coordinates different modules effectively.
Researcher Affiliation | Collaboration | Tao Chen1, Haochen Shi1, Liyuan Liu2, Siliang Tang1*, Jian Shao1, Zhigang Chen3, Yueting Zhuang1; 1Zhejiang University, 2University of Illinois at Urbana-Champaign, 3iFLYTEK Research
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper states: 'Models marked with * are quoted from original papers, since there are no open-source codes released.' and does not provide a link to the code for MULTICAST, indicating it is not open source.
Open Datasets | Yes | We evaluate our model on the widely used DSRE dataset NYT (Riedel, Yao, and McCallum 2010), which aligns Freebase (Bollacker et al. 2008) entity relation with New York Times corpus.
Dataset Splits | No | The paper describes the training and test sets but does not specify a validation split, so a reproduction would have to construct its own (see the split sketch after the table).
Hardware Specification | No | The paper does not provide specific hardware details (such as GPU or CPU models, or memory) used for running its experiments.
Software Dependencies | No | The paper mentions using 'OpenNRE (Han et al. 2019)' but does not pin a version for it or any other software dependency, which full reproducibility requires (see the environment-capture sketch below).
Experiment Setup | No | The paper does not explicitly report experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings (see the configuration sketch below).
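Because only training and test sets are released, a reproduction has to carve its own validation data out of the training set. The sketch below is one hedged way to do that, assuming the OpenNRE-style NYT10 JSON-lines layout ('text', 'relation', and 'h'/'t' entity dictionaries); the file name and the 10% ratio are illustrative choices, not taken from the paper. Splitting at the bag (entity-pair) level keeps every sentence for a pair on the same side of the split, matching how DSRE systems are trained and evaluated.

```python
import json
import random

def load_nyt10(path):
    """Load an NYT10-style JSON-lines file: one instance per line with
    'text', 'relation', and 'h'/'t' entity dictionaries (assumed OpenNRE layout)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

def bag_level_split(instances, dev_ratio=0.1, seed=42):
    """Hold out a validation set at the bag (entity-pair) level so that
    all sentences mentioning the same pair stay on one side of the split."""
    bags = {}
    for inst in instances:
        key = (inst["h"]["id"], inst["t"]["id"])  # entity-pair key
        bags.setdefault(key, []).append(inst)
    keys = sorted(bags)
    random.Random(seed).shuffle(keys)
    n_dev = int(len(keys) * dev_ratio)
    dev = [inst for k in keys[:n_dev] for inst in bags[k]]
    train = [inst for k in keys[n_dev:] for inst in bags[k]]
    return train, dev

# Hypothetical file name; the paper does not name its data files.
# train_insts, dev_insts = bag_level_split(load_nyt10("nyt10_train.txt"))
```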
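Since no dependency versions are reported, a reproduction should at least record the environment it actually ran with. Below is a minimal Python sketch, assuming Python 3.8+ with pip-installed packages; the package list is hypothetical apart from OpenNRE, the only dependency the paper names.

```python
import platform
import importlib.metadata as metadata

# Hypothetical dependency list; the paper names only OpenNRE (Han et al. 2019).
PACKAGES = ["opennre", "torch", "numpy"]

print("python", platform.python_version())
for name in PACKAGES:
    try:
        # Report the installed version so the run can be reconstructed later.
        print(name, metadata.version(name))
    except metadata.PackageNotFoundError:
        print(name, "not installed")
```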
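Finally, because no hyperparameters are reported, any reproduction has to pick and document its own. The sketch below shows the kind of configuration record one would archive next to each result; every value is a hypothetical placeholder, not a setting taken from the paper.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class ExperimentConfig:
    """All values are hypothetical placeholders; the paper reports none of them."""
    encoder: str = "pcnn"        # sentence encoder
    optimizer: str = "sgd"
    learning_rate: float = 0.1
    batch_size: int = 160
    max_epochs: int = 60
    seed: int = 42

config = ExperimentConfig()
with open("config.json", "w") as f:
    json.dump(asdict(config), f, indent=2)  # archive alongside every result
print(json.dumps(asdict(config), indent=2))
```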