HiGAN: Handwriting Imitation Conditioned on Arbitrary-Length Texts and Disentangled Styles

Authors: Ji Gan, Weiqiang Wang

AAAI 2021, pp. 7484-7492 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on handwriting benchmarks validate our superiority in terms of visual quality and scalability when comparing to the state-of-the-art methods for handwritten word/text synthesis.
Researcher Affiliation | Academia | Ji Gan, Weiqiang Wang* School of Computer Science and Technology, University of Chinese Academy of Sciences; ganji15@mails.ucas.ac.cn, wqwang@ucas.ac.cn
Pseudocode | No | The paper describes the components and training objectives of HiGAN but does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code and pre-trained models can be found at https://github.com/ganji15/HiGAN.
Open Datasets | Yes | To evaluate our HiGAN, we use the following two handwriting benchmarks: IAM (Marti and Bunke 2002) dataset... CVL (Kleber and Sablatnig 2013) dataset...
Dataset Splits | Yes | IAM (Marti and Bunke 2002) dataset consists of 9862 text lines with around 63K English words, written by 500 different writers. The dataset provides the official splits with mutually exclusive authors. In our settings, only the training & validation sets are used for training GANs.
Hardware Specification | Yes | Experiments are conducted on a Dell workstation with an Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20 GHz, 32 GB RAM, and an NVIDIA Quadro P5000 GPU with 16 GB memory.
Software Dependencies | No | The paper mentions 'Adam (Diederik and Ba 2015)' as the optimizer but does not specify versions for software dependencies such as programming languages or libraries.
Experiment Setup | Yes | The model is optimized using Adam (Diederik and Ba 2015) with a learning rate of 0.0001 and (β1, β2) = (0.5, 0.999). The batch size is set to 16 for all experiments, and all models are trained over 100K iterations.
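
The Experiment Setup row reports concrete optimizer hyperparameters. Below is a minimal sketch of how that configuration might be expressed in PyTorch (the released code is PyTorch-based); only the Adam settings (learning rate 0.0001, betas (0.5, 0.999)), the batch size of 16, and the 100K-iteration budget come from the paper, while the generator/discriminator modules and the data loader are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of the reported training configuration, not the authors' code.
# Only the Adam hyperparameters, batch size, and iteration count are taken from the paper;
# the network modules and dataset below are dummy placeholders.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

generator = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64 * 64))    # placeholder
discriminator = nn.Sequential(nn.Linear(64 * 64, 256), nn.ReLU(), nn.Linear(256, 1))  # placeholder

# Adam with lr = 0.0001 and (beta1, beta2) = (0.5, 0.999), as stated in the paper.
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4, betas=(0.5, 0.999))

# Batch size 16 and a 100K-iteration budget, as stated in the paper; dummy data stands in
# for the IAM/CVL handwriting samples.
loader = DataLoader(TensorDataset(torch.randn(1024, 64 * 64)), batch_size=16, shuffle=True)
max_iterations = 100_000
```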