Name Nationality Classification with Recurrent Neural Networks
Authors: Jinhyuk Lee, Hyunjae Kim, Miyoung Ko, Donghee Choi, Jaehoon Choi, Jaewoo Kang
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluation on Olympic record data shows that our model achieves greater accuracy than previous feature-based approaches in nationality prediction tasks. We also evaluate our proposed model and baseline models on the name ethnicity classification task, again achieving better or comparable performance. |
| Researcher Affiliation | Collaboration | Korea University; Sogang University; Konolabs, Inc. |
| Pseudocode | No | The paper describes the model using mathematical equations for RNNs and LSTMs, but it does not provide pseudocode or a clearly labeled algorithm block. |
| Open Source Code | Yes | Our source code and datasets for the experiments are publicly available on the Web: https://github.com/63coldnoodle/ethnicity-tensorflow |
| Open Datasets | Yes | We crawled Olympic records data from the official Olympic website (http://www.olympic.org). A total of 17,721 pairs of personal names and nationalities were collected; statistics of the dataset are in Table 2. |
| Dataset Splits | Yes | Training: 10633 (raw) / 10592 (cleaned); validation: 3545 / 3531; testing: 3543 / 3530. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., CPU, GPU models, memory) used for the experiments. |
| Software Dependencies | No | The paper mentions using 'Sklearn library' and 'Tensorflow' but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | Using the LSTM model, we performed parameter validation on 5 different hyperparameters as described in Table 4. The best-performing parameter set from the validation phase was used for the LSTM model. We used the Adam optimizer [Kingma and Ba, 2014] for the LSTM model, and the learning rate decay was set to 0.99 for every 100 iterations. Additionally, a mini-batch size of 1000 was used. Norms of gradients were clipped at 5. |
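The optimization details quoted above (stepwise learning-rate decay of 0.99 every 100 iterations, and clipping gradient norms at 5) can be sketched as two small helpers. This is a minimal NumPy sketch of the described schedule, not the authors' code: the paper's repository uses TensorFlow, and the function names and the base learning rate below are assumptions for illustration.

```python
import numpy as np

def decayed_lr(base_lr, step, decay=0.99, every=100):
    """Learning rate decayed by `decay` once every `every` iterations,
    matching the paper's '0.99 for every 100 iterations'.
    `base_lr` is a hypothetical value; the paper validates it (Table 4)."""
    return base_lr * decay ** (step // every)

def clip_by_global_norm(grads, max_norm=5.0):
    """Rescale a list of gradient arrays so their global L2 norm
    does not exceed `max_norm` (the paper clips at 5)."""
    norm = float(np.sqrt(sum(np.sum(g ** 2) for g in grads)))
    if norm > max_norm:
        grads = [g * (max_norm / norm) for g in grads]
    return grads, norm

# Example: at step 250 the rate has been decayed twice (floor(250/100) = 2).
lr = decayed_lr(0.001, 250)          # 0.001 * 0.99**2

# Example: a gradient with global norm 10 is scaled down to norm 5.
clipped, pre_norm = clip_by_global_norm([np.array([6.0, 8.0])], max_norm=5.0)
```

In TensorFlow these two pieces would typically be expressed with `tf.clip_by_global_norm` and a step-decayed learning-rate schedule passed to the Adam optimizer; the paper does not specify which API calls were used.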