A Multi-Implicit Neural Representation for Fonts

Authors: Pradyumna Reddy, Zhifei Zhang, Zhaowen Wang, Matthew Fisher, Hailin Jin, Niloy Mitra

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We extensively evaluate the proposed representation for various tasks including reconstruction, interpolation, and synthesis to demonstrate clear advantages with existing alternatives.
Researcher Affiliation | Collaboration | ¹University College London, ²Adobe Research
Pseudocode | No | The paper describes methods in text and provides diagrams, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement about code release or links to source code repositories.
Open Datasets | Yes | For a fair comparison, we train all the algorithms on the same dataset used by Im2Vec [20], which consists of 12,505 images. ... We train on 1,000 font families, i.e., 52,000 images, and test on 100 font families.
Dataset Splits | No | The paper mentions training and testing but does not explicitly describe a separate validation split with specific percentages or sample counts for hyperparameter tuning or model selection.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU models, CPU types).
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | For the local corner template loss, we first perform corner detection. A corner is defined as a location where two curves intersect at an angle less than a threshold (the threshold is 3 rad, or about 171°, in our experiments). The template size is 7 × 7, corresponding to an image size of 128 × 128. ... We set the initial anti-aliasing range to the whole image range and slowly shrink it to kw⁻¹ during training, where w is the image width and k = 4 in our experiments.
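The quoted setup describes two concrete rules: a corner test (tangents meeting at an angle below 3 rad) and an anti-aliasing range that shrinks from the whole image down to k/w. A minimal sketch of both, assuming a simple linear shrink schedule and 2-D tangent vectors (the paper does not specify the interpolation or the exact corner-detection implementation, so the function names and the linear schedule here are hypothetical):

```python
import math

def is_corner(t1, t2, threshold_rad=3.0):
    """Corner test sketch: the two curve tangents t1, t2 (2-D tuples)
    meet at an angle below the threshold (3 rad ~ 171 degrees)."""
    dot = t1[0] * t2[0] + t1[1] * t2[1]
    norm = math.hypot(*t1) * math.hypot(*t2)
    angle = math.acos(max(-1.0, min(1.0, dot / norm)))
    return angle < threshold_rad

def antialias_range(step, total_steps, image_width=128, k=4):
    """Hypothetical linear schedule: start at the whole (normalized)
    image range 1.0 and shrink to k / w, with k = 4 as in the paper."""
    start, end = 1.0, k / image_width
    t = min(step / total_steps, 1.0)
    return start + (end - start) * t
```

For example, perpendicular tangents (angle π/2 ≈ 1.57 rad) register as a corner, while exactly opposite tangents (angle π ≈ 3.14 rad, a smooth continuation) do not.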