Towards Fully Automated Manga Translation

Authors: Ryota Hinami, Shonosuke Ishiwatari, Kazuhiko Yasuda, Yusuke Matsui (pp. 12998-13008)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To confirm the effectiveness of our models and Manga corpus, we ran translation experiments on the OpenMantra dataset. ... Table 1 shows the results of the manual and automatic evaluation.
Researcher Affiliation | Collaboration | Ryota Hinami (1*), Shonosuke Ishiwatari (1*), Kazuhiko Yasuda (2), Yusuke Matsui (2); 1: Mantra Inc., 2: The University of Tokyo; emails: {hinami,ishiwatari}@mantra.co.jp, matsui@hal.t.u-tokyo.ac.jp, yasuda@tkl.iis.u-tokyo.ac.jp
Pseudocode | No | The paper describes a proposed framework and pipeline using descriptive text and figures (Fig. 2, Fig. 5), but it does not include any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper describes its proposed methods and system but does not provide a link to the source code or explicitly state that it has been released publicly. The only link provided (https://github.com/mantra-inc/open-mantra-dataset) is for a dataset.
Open Datasets | Yes | OpenMantra: ... This dataset is publicly available for research purposes.1 [Footnote 1: https://github.com/mantra-inc/open-mantra-dataset] ... In addition, we used OpenSubtitles2018 (OS18) (Lison, Tiedemann, and Kouylekov 2018)... (a hedged loading sketch appears below the table)
Dataset Splits | Yes | We randomly excluded 2,000 pairs for validation purposes. In addition, we used OpenSubtitles2018 (OS18) ... We excluded 3K sentences for the validation and 5K for the test and used the remaining 2M sentences for training. (a split sketch appears below the table)
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions several models and frameworks (e.g., Faster R-CNN, ResNet101, Transformer, EdgeConnect) but does not specify software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA x.x).
Experiment Setup | No | The paper states that for the Transformer model, 'we chose the Transformer (big) model and set its default parameters in accordance with (Vaswani et al. 2017)'. This defers to external work for the hyperparameters rather than listing the full experimental setup within the text. (the commonly cited Transformer (big) defaults are summarized below the table)
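
For the Open Datasets row, a minimal sketch of inspecting the released OpenMantra data is given below. It assumes the GitHub repository (https://github.com/mantra-inc/open-mantra-dataset) has been cloned locally and that its annotations are stored as JSON files; the directory layout, file names, and the load_annotations helper are assumptions for illustration, not the repository's documented structure.

import json
from pathlib import Path


def load_annotations(repo_root):
    """Collect every JSON annotation file found under the cloned repository.

    Assumption: the OpenMantra repository stores its annotations as JSON;
    the exact file layout and schema are not restated here.
    """
    annotations = []
    for json_path in Path(repo_root).rglob("*.json"):
        with json_path.open(encoding="utf-8") as f:
            annotations.append(json.load(f))
    return annotations


if __name__ == "__main__":
    data = load_annotations("open-mantra-dataset")
    print(f"Loaded {len(data)} annotation file(s)")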
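
For the Dataset Splits row, the OS18 protocol quoted above (3K validation sentences, 5K test sentences, the remaining roughly 2M for training) can be illustrated with a generic random split. The sketch below assumes the corpus is held in memory as a list of aligned sentence pairs; the random seed and sampling order are placeholders, since the paper does not report them.

import random


def split_parallel_corpus(pairs, n_valid=3000, n_test=5000, seed=0):
    """Hold out validation and test pairs at random; keep the rest for training.

    `pairs` is assumed to be a list of (source, target) sentence pairs.
    The seed is an illustrative placeholder, not the authors' actual value.
    """
    rng = random.Random(seed)
    shuffled = list(pairs)
    rng.shuffle(shuffled)
    valid = shuffled[:n_valid]
    test = shuffled[n_valid:n_valid + n_test]
    train = shuffled[n_valid + n_test:]
    return train, valid, test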
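
For the Experiment Setup row, the 'default parameters' of the Transformer (big) model refer to the configuration reported in Vaswani et al. (2017). The summary below restates that external configuration for convenience; it is not a setup listed in the manga-translation paper, and any changes the authors may have made are unknown.

# Transformer (big) defaults as reported in Vaswani et al. (2017).
# Restated from the external reference; the paper under review does not
# list these values itself, so possible deviations are unknown.
TRANSFORMER_BIG = {
    "encoder_layers": 6,
    "decoder_layers": 6,
    "d_model": 1024,
    "feed_forward_dim": 4096,
    "attention_heads": 16,
    "dropout": 0.3,  # value used for the big EN-DE model in the reference
    "label_smoothing": 0.1,
    "optimizer": "Adam (beta1=0.9, beta2=0.98, eps=1e-9)",
    "warmup_steps": 4000,
    "lr_schedule": "d_model**-0.5 * min(step**-0.5, step * warmup_steps**-1.5)",
}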