Optimize & Reduce: A Top-Down Approach for Image Vectorization
Authors: Or Hirschorn, Amir Jevnisek, Shai Avidan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments on hundreds of images, we demonstrate that our method is domain agnostic and outperforms existing works in both reconstruction and perceptual quality for a fixed number of shapes. |
| Researcher Affiliation | Academia | Tel-Aviv University, Israel orhirschorn@mail.tau.ac.il, amirjevn@mail.tau.ac.il, avidan@eng.tau.ac.il |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. The method is described in prose and illustrated with diagrams (Figures 3 and 4) and mathematical equations. |
| Open Source Code | Yes | Our code is publicly available: https://github.com/ajevnisek/optimize-and-reduce |
| Open Datasets | Yes | For evaluation, we collect five datasets to cover a range of image complexities... All datasets will be made publicly available. EMOJI dataset. We take two snapshots of the Noto Emoji project (Google 2022). |
| Dataset Splits | No | The paper does not provide specific dataset split information (e.g., percentages or exact counts for training, validation, and test sets). It mentions the datasets used and general experimental settings but not the data partitioning methodology. |
| Hardware Specification | Yes | Runtime was measured on an NVIDIA RTX A5000 GPU. |
| Software Dependencies | No | The paper states, 'Our code is written in PyTorch (Paszke et al. 2017),' but does not provide a specific version number for PyTorch or any other software dependencies crucial for reproduction. |
| Experiment Setup | Yes | We use three Reduce steps that follow an exponential decay schedule (i.e., the number of shapes is halved in each iteration). For the qualitative experiments, we use MSE for optimization and L1 for reduction. All quantitative evaluations use L1 with our geometric loss for optimization. For reduction, we use L1 and CLIP on Midjourney, and for the other datasets, we use only L1. |
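For concreteness, the exponential-decay schedule quoted in the Experiment Setup row can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' released code; the helper name `reduce_schedule` and the starting budget of 256 shapes are assumptions made for the example.

```python
# Illustrative sketch of the exponential-decay Reduce schedule described in
# the paper: the shape budget is halved at each of three Reduce steps.
# Function name and starting budget are assumptions, not the authors' API.

def reduce_schedule(initial_shapes: int, num_reduce_steps: int = 3) -> list[int]:
    """Return the shape budget before and after each Reduce step."""
    schedule = [initial_shapes]
    for _ in range(num_reduce_steps):
        # Halve the number of shapes, keeping at least one shape.
        schedule.append(max(1, schedule[-1] // 2))
    return schedule


if __name__ == "__main__":
    # Starting from a hypothetical budget of 256 shapes, three Reduce steps
    # yield 256 -> 128 -> 64 -> 32.
    print(reduce_schedule(256))  # [256, 128, 64, 32]
```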