Hyperbolic Interaction Model for Hierarchical Multi-Label Classification
Authors: Boli Chen, Xin Huang, Lin Xiao, Zixin Cai, Liping Jing
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on three benchmark datasets. The results demonstrate that the new model can realistically capture the complex data structures and further improve performance for HMLC compared with the state-of-the-art methods. |
| Researcher Affiliation | Academia | Boli Chen, Xin Huang, Lin Xiao, Zixin Cai, Liping Jing Beijing Key Lab of Traffic Data Analysis and Mining Beijing Jiaotong University, Beijing, China |
| Pseudocode | No | The paper describes the model architecture and mathematical operations (e.g., Möbius addition, Poincaré distance), but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | To facilitate future research, our code is publicly available. https://github.com/bcol23/HyperIM |
| Open Datasets | Yes | Experiments are carried out on three publicly available multi-label text classification datasets, including the small-scale RCV1 (Lewis et al. 2004), the middle-scale Zhihu and the large-scale WikiLSHTC (Partalas et al. 2015). |
| Dataset Splits | No | Table 1 provides 'Ntrain' and 'Ntest' counts for the datasets, indicating training and test instances. However, it does not explicitly mention a separate validation split or provide its sample counts or percentages. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions various components and models (e.g., 'RNN-based word encoder', 'GRU architecture', 'Poincaré GloVe') but does not specify version numbers for any programming languages, libraries, or frameworks used in the implementation (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | Yes | The embedding dimension for Hyper IM is 75 2D as it generally outperforms the baselines. ... we propose to use a negative sampling method to improve the scalability during training. |
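The Möbius addition and Poincaré distance named in the Pseudocode row are standard operations on the Poincaré ball model of hyperbolic space. As a minimal sketch (not the paper's implementation), the formulas for curvature c = 1 can be written in NumPy; the function names here are illustrative, not taken from the HyperIM repository:

```python
import numpy as np

def mobius_add(x, y):
    """Möbius addition on the Poincaré ball (curvature c = 1).

    x, y: 1-D arrays with Euclidean norm < 1 (points inside the unit ball).
    """
    xy = np.dot(x, y)          # inner product <x, y>
    x2 = np.dot(x, x)          # squared norm of x
    y2 = np.dot(y, y)          # squared norm of y
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    den = 1 + 2 * xy + x2 * y2
    return num / den

def poincare_distance(x, y):
    """Geodesic distance between two points inside the unit ball."""
    diff2 = np.dot(x - y, x - y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    return np.arccosh(1 + 2 * diff2 / ((1 - x2) * (1 - y2)))
```

Adding the origin is the identity (mobius_add(0, y) returns y), and the distance is symmetric with d(x, x) = 0, which makes these formulas easy to sanity-check in a reimplementation.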