GANTEE: Generative Adversarial Network for Taxonomy Entering Evaluation
Authors: Zhouhong Gu, Sihang Jiang, Jingping Liu, Yanghua Xiao, Hongwei Feng, Zhixu Li, Jiaqing Liang, Jian Zhong
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments on three real-world large-scale datasets with two different languages show that GANTEE improves the performance of the existing taxonomy expansion methods in both effectiveness and efficiency. ... Extensive experiments have been conducted to verify the superiority of GANTEE on three datasets in two languages. |
| Researcher Affiliation | Collaboration | Zhouhong Gu (1,4), Sihang Jiang (1), Jingping Liu (2,\*), Yanghua Xiao (1,3,\*), Hongwei Feng (1), Zhixu Li (1), Jiaqing Liang (4), Jian Zhong (5). (1) Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, China; (2) School of Information Science and Engineering, East China University of Science and Technology; (3) Fudan-Aishu Cognitive Intelligence Joint Research Center; (4) School of Data Science, Fudan University; (5) HUAWEI CBG Edu AI Lab |
| Pseudocode | No | The paper describes methods in text and equations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statements about releasing source code or provide links to a code repository. |
| Open Datasets | Yes | Microsoft Academic Graph on Field-of-Study (MAG-FoS): This taxonomy (Sinha et al. 2015) consists of the public Field-of-Study taxonomy (FoS). ... Microsoft Academic Graph on Computer-Science (MAG-CS): Following the work of TaxoExpan (Shen et al. 2020), we construct MAG-CS based on the subgraph of MAG-FoS related to the Computer Science domain. CN-Probase: This is CN-Probase (Chen et al. 2019), an open Chinese general-concept taxonomy. |
| Dataset Splits | Yes | To ensure the validation and test data won't be trained on, we randomly mask 20% of leaf concepts (along with their relations) for validation and testing. (A minimal split sketch appears after this table.) |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory, or cloud instance types used for running experiments. |
| Software Dependencies | No | The paper mentions software components like GPT-2, Transformer, LSTM, BERT, and uses a PyTorch scheduler, but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | Parameter Setting: For learning-based methods, we use the SGD optimizer with initial learning rate 0.0001 and the ReduceLROnPlateau scheduler with ten patience epochs. During model training, the batch size and negative sample size are set to 16 and 256 in the overall performance experiments, respectively. We set the epochs to ten and use two Graph Attention Layers with 8 and 1 attention heads, with a position dimension of 100. (A configuration sketch appears after this table.) |
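
The Dataset Splits row quotes the paper's 20% leaf-masking protocol. Below is a minimal sketch of that protocol in Python; the function name, the edge-list representation of the taxonomy, and the even validation/test division are assumptions for illustration, not details from the GANTEE paper or its (unreleased) code.

```python
import random

def mask_leaf_concepts(edges, mask_ratio=0.20, seed=0):
    """Hold out a fraction of leaf concepts (with their relations)
    for validation and testing, per the paper's split protocol.
    `edges` is a list of (parent, child) hypernymy pairs; all names
    here are illustrative, not from the GANTEE codebase."""
    random.seed(seed)
    parents = {p for p, _ in edges}
    children = {c for _, c in edges}
    leaves = sorted(children - parents)  # concepts that are never parents

    masked = set(random.sample(leaves, int(len(leaves) * mask_ratio)))
    train_edges = [e for e in edges if e[1] not in masked]
    held_out = [e for e in edges if e[1] in masked]

    # Assumption: held-out leaves are divided evenly between the
    # validation and test sets (the paper does not state the ratio).
    random.shuffle(held_out)
    mid = len(held_out) // 2
    return train_edges, held_out[:mid], held_out[mid:]
```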
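
The Experiment Setup row reads directly as a PyTorch training configuration. The sketch below wires the quoted hyperparameters (SGD at 0.0001, ReduceLROnPlateau with ten patience epochs, batch size 16, 256 negative samples, ten epochs, two GAT layers with 8 and 1 heads, position dimension 100) into runnable code; the use of PyTorch Geometric's `GATConv`, the 768-dim input features, and the `EgoGraphEncoder` class are assumptions, since the paper releases no code.

```python
import torch
import torch.nn as nn
from torch.optim import SGD
from torch.optim.lr_scheduler import ReduceLROnPlateau
from torch_geometric.nn import GATConv  # assumption: PyG supplies the GAT layers

# Hyperparameters quoted from the paper's "Parameter Setting" paragraph.
LR, EPOCHS, PATIENCE = 1e-4, 10, 10
BATCH_SIZE, NEG_SAMPLES, POS_DIM = 16, 256, 100

class EgoGraphEncoder(nn.Module):
    """Two GAT layers with 8 and 1 attention heads, as quoted above.
    The class name and the 768-dim input (e.g. BERT features) are
    illustrative assumptions, not details from the paper."""
    def __init__(self, in_dim=768, hid_dim=100):
        super().__init__()
        self.gat1 = GATConv(in_dim + POS_DIM, hid_dim, heads=8, concat=True)
        self.gat2 = GATConv(hid_dim * 8, hid_dim, heads=1, concat=False)

    def forward(self, x, edge_index):
        return self.gat2(torch.relu(self.gat1(x, edge_index)), edge_index)

model = EgoGraphEncoder()
optimizer = SGD(model.parameters(), lr=LR)
scheduler = ReduceLROnPlateau(optimizer, patience=PATIENCE)

for epoch in range(EPOCHS):
    # A real loop would iterate mini-batches of size BATCH_SIZE, drawing
    # NEG_SAMPLES negative concepts per positive pair (details omitted).
    val_loss = 0.0  # placeholder metric watched by the scheduler
    scheduler.step(val_loss)
```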