Learning K-way D-dimensional Discrete Embedding for Hierarchical Data Visualization and Retrieval
Authors: Xiaoyuan Liang, Martin Renqiang Min, Hongyu Guo, Guiling Wang
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on synthetic and real-world datasets show that our proposed HKD embedding can effectively reveal the semantic structure of data via hierarchical data visualization and greatly reduce the search space of nearest neighbor retrieval while preserving high accuracy. |
| Researcher Affiliation | Collaboration | Xiaoyuan Liang¹·², Martin Renqiang Min¹, Hongyu Guo³ and Guiling Wang² (¹NEC Labs America, Princeton; ²New Jersey Institute of Technology; ³National Research Council Canada) |
| Pseudocode | No | The paper includes a framework illustration (Figure 1) but does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states that "The supplementary materials are available at https://sites.google.com/view/hkd-supplementary/" and notes that results are available there, but it does not state that source code for the methodology is provided. |
| Open Datasets | Yes | We compare our approach against the state-of-the-art KD code learning method [Shu and Nakayama, 2018] on three datasets: a synthetic dataset, Poincaré embedding [Nickel and Kiela, 2017] on WORDNET [Miller, 1995], and embedding on the CIFAR100 dataset [Krizhevsky, 2009]. |
| Dataset Splits | Yes | The synthetic dataset is split into two parts, training and test. ... This dataset has 50000 training images and 10000 test images in total. |
| Hardware Specification | Yes | The whole model is built using PyTorch [Paszke et al., 2017] and is trained using a GTX 1080 Ti GPU. |
| Software Dependencies | No | The paper mentions "PyTorch [Paszke et al., 2017]" but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We train our network using RMSprop [Tieleman and Hinton, 2012] with learning rate of 0.0001 and mini-batch of size 128. |
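
The dataset rows above report a CIFAR-100 split of 50000 training and 10000 test images. As a minimal sanity check, that split can be confirmed with torchvision; the paper does not name a data-loading library, so torchvision is an assumption here, not the authors' pipeline.

```python
from torchvision import datasets, transforms

transform = transforms.ToTensor()
train_set = datasets.CIFAR100(root="./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR100(root="./data", train=False, download=True, transform=transform)

# Matches the split quoted in the table: 50000 training and 10000 test images.
print(len(train_set), len(test_set))  # -> 50000 10000
```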
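The experiment-setup row reports RMSprop with a learning rate of 0.0001 and mini-batches of size 128. The sketch below (continuing from `train_set` above) wires up that configuration in PyTorch; the linear model and cross-entropy loss are illustrative stand-ins, since this section does not reproduce the paper's HKD architecture or training objective.

```python
import torch
from torch.utils.data import DataLoader

# Reported hyperparameters: RMSprop, learning rate 0.0001, mini-batch size 128.
# The model and loss here are placeholders, not the paper's HKD network.
model = torch.nn.Linear(3 * 32 * 32, 100)  # CIFAR-100: 32x32 RGB images, 100 classes
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)
loader = DataLoader(train_set, batch_size=128, shuffle=True)

for images, labels in loader:
    optimizer.zero_grad()
    logits = model(images.view(images.size(0), -1))  # flatten for the linear stand-in
    loss = torch.nn.functional.cross_entropy(logits, labels)
    loss.backward()
    optimizer.step()
```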