Self-Supervised Learning for Enhancing Spatial Awareness in Free-Hand Sketches
Authors: Xin Wang, Tengjie Li, Sicong Zang, Shikui Tu, Lei Xu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental results demonstrate that our model outperforms existing methods in both our proposed task and the traditional controllable sketch synthesis task. Additionally, we found that SketchGloc can learn more robust representations under our proposed task setting. We selected controllable sketch synthesis [Zang et al., 2021] and the sketch reorganization task proposed by us to validate whether SketchGloc has learned accurate and robust graphic sketch representations. |
| Researcher Affiliation | Academia | Xin Wang¹, Tengjie Li¹, Sicong Zang¹,², Shikui Tu¹ and Lei Xu¹,³. ¹Dept. of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China; ²School of Computer Science and Technology, Donghua University, Shanghai 201620, China; ³Guangdong Institute of Intelligence Science and Technology, Zhuhai, Guangdong 519031, China |
| Pseudocode | Yes | Algorithm 1: Disturbing sketch strokes. Data: S = {s1, s2, ..., sM}, scale; Result: S. 1: Initialization: i ← 0; 2: Calculate the size of the sketch S; // add noise 3: while i ≤ n do 4: S[i,0], S[i,1] ← x_abs + S[i,0], y_abs + S[i,1] // adding noise to each of the strokes 5: if i = 0 then 6: ϵx, ϵy ← G(0, 1), G(0, 1); 7: add noise ϵx · size · scale, ϵy · size · scale to si; 8: end. (A hedged Python interpretation of this procedure is sketched after the table.) |
| Open Source Code | Yes | The source code is available at https://github.com/CMACH508/SketchGloc. |
| Open Datasets | Yes | We evaluated SketchGloc on QuickDraw [Ha and Eck, 2018], a large vector sketch dataset containing tens of millions of human free-hand sketches across 345 classes. To account for variability between different sketch classes, we utilized three datasets proposed by [Zang et al., 2021] in our experiments. DS1 and DS2 were taken from [Zang et al., 2021]... DS3 [Qi et al., 2021] adds car, cat and horse to DS1, constituting a more challenging dataset. |
| Dataset Splits | Yes | For each category, we used 70,000 sketches for training, 2,500 for validation, and 2,500 for testing. (A minimal loading sketch for this split appears after the table.) |
| Hardware Specification | No | The paper mentions that "Additional experimental details are provided in the appendix" but the provided text does not contain any specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments. |
| Software Dependencies | No | The paper states, "The Adam optimizer was employed for learning with parameters β1 = 0.9 and β2 = 0.999." However, it does not specify any software versions (e.g., Python, PyTorch, TensorFlow versions or other libraries) that would be needed to replicate the experiment. |
| Experiment Setup | Yes | We set the number of strokes M and the batch size N to 50 and 128, respectively. The Adam optimizer was employed for learning with parameters β1 = 0.9 and β2 = 0.999. Additional experimental details are provided in the appendix. (An optimizer-configuration sketch with these settings appears after the table.) |
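
The pseudocode excerpt above is only partially recoverable from the extracted text, so the following is a minimal Python interpretation of the stroke-disturbing procedure, not the authors' implementation. It assumes sketches in the standard stroke-3 format (Δx, Δy, pen-lift), uses `scale` to control noise magnitude relative to the sketch's bounding-box size, and translates every stroke by its own Gaussian offset, following the pseudocode's comment, since the recovered conditional ("if i = 0") is ambiguous. The function name `disturb_strokes` is illustrative.

```python
import numpy as np

def disturb_strokes(sketch, scale=0.05, rng=None):
    """Add per-stroke Gaussian positional noise to a sketch.

    A minimal interpretation of Algorithm 1 ("Disturbing sketch strokes"):
    offsets are converted to absolute coordinates, each stroke is translated
    by Gaussian noise proportional to the sketch size, and the result is
    converted back to offsets.

    Args:
        sketch: (L, 3) array in stroke-3 format (dx, dy, pen_lift).
        scale:  noise magnitude relative to the sketch's bounding-box size.
        rng:    optional numpy random Generator.
    """
    rng = rng or np.random.default_rng()
    pts = sketch.astype(np.float64).copy()

    # Convert offsets (dx, dy) to absolute coordinates (x_abs, y_abs).
    pts[:, :2] = np.cumsum(pts[:, :2], axis=0)

    # Sketch "size": the larger side of the bounding box.
    size = max(np.ptp(pts[:, 0]), np.ptp(pts[:, 1]))

    # Split into strokes at pen-lift points and jitter each stroke as a whole.
    stroke_ends = np.flatnonzero(pts[:, 2] == 1)
    start = 0
    for end in stroke_ends:
        eps_x, eps_y = rng.standard_normal(2)      # eps ~ G(0, 1)
        pts[start:end + 1, 0] += eps_x * size * scale
        pts[start:end + 1, 1] += eps_y * size * scale
        start = end + 1

    # Handle a trailing stroke without an explicit pen-lift marker.
    if start < len(pts):
        eps_x, eps_y = rng.standard_normal(2)
        pts[start:, 0] += eps_x * size * scale
        pts[start:, 1] += eps_y * size * scale

    # Convert back to offsets.
    disturbed = pts.copy()
    disturbed[1:, :2] = pts[1:, :2] - pts[:-1, :2]
    return disturbed
```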
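
The 70,000/2,500/2,500 per-category split matches the standard sketch-rnn-style QuickDraw `.npz` releases, which ship with `train`, `valid`, and `test` arrays of those sizes. The loader below is a hedged sketch under that assumption; the file layout, path, and example category list are illustrative and do not reproduce the exact DS1/DS2/DS3 compositions.

```python
import numpy as np

# Example categories only; the actual DS1/DS2/DS3 compositions are defined
# in Zang et al. (2021) and Qi et al. (2021).
CATEGORIES = ["bee", "bus", "flower", "giraffe", "pig"]

def load_quickdraw_splits(data_dir, categories=CATEGORIES):
    """Load sketch-rnn-style QuickDraw .npz files and return per-split lists.

    Each .npz file is assumed to contain 'train' (70,000 sketches),
    'valid' (2,500) and 'test' (2,500) arrays of stroke-3 sequences.
    """
    splits = {"train": [], "valid": [], "test": []}
    for name in categories:
        data = np.load(f"{data_dir}/{name}.npz",
                       encoding="latin1", allow_pickle=True)
        for split in splits:
            splits[split].extend(data[split])
    return splits

# Example usage:
# splits = load_quickdraw_splits("quickdraw_data")
# print(len(splits["train"]), len(splits["valid"]), len(splits["test"]))
```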
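
The optimizer settings quoted above (Adam with β1 = 0.9, β2 = 0.999, batch size N = 128, at most M = 50 strokes) translate directly into a few lines of PyTorch. This is only a configuration sketch: the learning rate and the placeholder model are assumptions, since neither the architecture nor the learning rate appears in the extracted text.

```python
import torch

MAX_STROKES = 50    # M: maximum number of strokes per sketch
BATCH_SIZE = 128    # N: batch size

# Placeholder model; the real architecture is defined in the authors' repo.
model = torch.nn.Sequential(torch.nn.Linear(2 * MAX_STROKES, 128))

# Adam with the betas reported in the paper; the learning rate is an
# assumption (not stated in the extracted text).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999))
```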