Linking Sketch Patches by Learning Synonymous Proximity for Graphic Sketch Representation
Authors: Sicong Zang, Shikui Tu, Lei Xu
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our method significantly improves the performance on both controllable sketch synthesis and sketch healing. |
| Researcher Affiliation | Academia | Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China {sczang, tushikui, leixu}@sjtu.edu.cn |
| Pseudocode | No | The paper describes the methodology with equations and textual descriptions but does not include a formally labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | The codes are in: https://github.com/CMACH508/SP-gra2seq. |
| Open Datasets | Yes | Three datasets from Quick Draw (Ha and Eck 2018) are selected for experimental comparison. |
| Dataset Splits | Yes | Each category contains 70K training, 2.5K valid and 2.5K test samples (1K = 1000). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as CPU or GPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' and 'ReLU activation function' but does not specify software dependencies with version numbers (e.g., 'PyTorch 1.9', 'Python 3.8'). |
| Experiment Setup | Yes | When training an SP-gra2seq, the patch number M, the mini-batch size N, the learning rate η for updating clustering centroids and the weight λ in the objective are fixed at 20, 256, 0.05 and 0.25, respectively. The numbers of cluster centroids K are 30, 50 and 50 for three datasets, respectively. We employ Adam optimizer for the network learning with the parameters β1 = 0.9, β2 = 0.999 and ϵ = 10⁻⁸. And the learning rate starts from 10⁻³ with a decay rate of 0.95 for each training epoch. |
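The reported setup can be summarized as a small configuration sketch. This is a minimal illustration under stated assumptions: the paper does not publish this code, and the form of the per-epoch decay (multiplicative exponential) is an assumption consistent with "a decay rate of 0.95 for each training epoch".

```python
# Hedged sketch of the SP-gra2seq training hyperparameters reported in the
# paper; names and the schedule function are illustrative, not the authors'.

# Fixed hyperparameters from the reported setup
M_PATCHES = 20          # patch number M
BATCH_SIZE = 256        # mini-batch size N
ETA_CENTROIDS = 0.05    # learning rate eta for updating clustering centroids
LAMBDA_WEIGHT = 0.25    # weight lambda in the objective
K_CENTROIDS = (30, 50, 50)  # cluster centroids K per dataset, in paper order

# Adam parameters and per-epoch exponential decay of the network learning rate
BETA1, BETA2, EPS = 0.9, 0.999, 1e-8
LR_INIT, DECAY = 1e-3, 0.95

def learning_rate(epoch: int) -> float:
    """Network learning rate after `epoch` epochs (assumed multiplicative decay)."""
    return LR_INIT * DECAY ** epoch

print(learning_rate(0))  # 0.001
print(learning_rate(10))
```

With a PyTorch implementation, the same schedule would typically be expressed as `torch.optim.Adam(params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8)` wrapped in `torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)`, though the paper does not confirm this framework.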