Learning Fractals by Gradient Descent
Authors: Cheng-Hao Tu, Hong-You Chen, David Carlyn, Wei-Lun Chao
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct two experiments to validate our approach. |
| Researcher Affiliation | Academia | Department of Computer Science and Engineering, The Ohio State University {tu.343, chen.9301, carlyn.1, chao.209}@osu.edu |
| Pseudocode | Yes | Algorithm 1: IFS generation process. See section 6 for details. Algorithm 2: IFS generation process via the FE layer. See subsection 6 for details. |
| Open Source Code | Yes | The code is provided at https://github.com/andytu28/LearningFractals. |
| Open Datasets | Yes | We first reconstruct random fractal images generated following FractalDB (Kataoka et al. 2020; Anderson and Farrell 2022). We then consider images that are not generated by fractals, including MNIST (hand-written digits) (LeCun et al. 1998), FMNIST (fashion clothing) (Xiao, Rasul, and Vollgraf 2017), and KMNIST (hand-written characters) (Clanuwat et al. 2018). |
| Dataset Splits | No | For the image reconstruction task: 'As we would recover any given target, no training and test splits are considered.' For the GAN extension, it mentions 'training/test splits' but does not specify a separate validation set or its characteristics. |
| Hardware Specification | No | The paper mentions 'computational resources by the Ohio Supercomputer Center and AWS Cloud Credits for Research' but does not specify particular GPU or CPU models, or exact hardware configurations. |
| Software Dependencies | No | The paper mentions 'deep learning frameworks like PyTorch (Paszke et al. 2019) and TensorFlow (Abadi et al. 2016)' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | Each S is learned with a batch size 50 and a learning rate 0.05 for 1K SGD steps. [...] we set τ = 1 (RBF kernel bandwidth) and apply an Adam optimizer (Kingma and Ba 2015). The number of transformations in S is fixed to N = 10, and we set T = 300 per image. (A hedged sketch of this setup follows the table.) |
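
To make the quoted setup concrete, below is a minimal, hypothetical PyTorch sketch of the overall recipe: sample 2-D points from a learnable IFS with a chaos-game loop (in the spirit of the referenced Algorithm 1) and fit the affine parameters by gradient descent against target points using an RBF-kernel, MMD-style loss. The constants reuse the values quoted above (N = 10, T = 300, batch size 50, learning rate 0.05, 1K steps, τ = 1, Adam), though how they map onto this toy loop is my reading, not the paper's. Everything else, including the uniform map selection, the tanh-based contractivity trick, `point_set_loss`, and the random `target`, is an assumption and not the paper's exact formulation.

```python
import torch

# Constants mirroring the quoted setup.
N, T, BATCH = 10, 300, 50          # maps per IFS, points per image, batch size
TAU, LR, STEPS = 1.0, 0.05, 1000   # RBF bandwidth, learning rate, optimization steps

# Learnable affine maps w_i(x) = A_i x + b_i (hypothetical parameterization).
A_raw = torch.nn.Parameter(torch.randn(N, 2, 2))
b = torch.nn.Parameter(0.1 * torch.randn(N, 2))

def chaos_game(A_raw, b, batch=BATCH, steps=T):
    """Generate `steps` 2-D points per sample by iterating randomly chosen maps."""
    # Crude stand-in for contractivity: bounding the entries keeps each map's
    # spectral norm below 1, so the iterates stay finite.
    A = 0.45 * torch.tanh(A_raw)
    x = torch.zeros(batch, 2)
    pts = []
    for _ in range(steps):
        idx = torch.randint(0, N, (batch,))   # uniform map choice (simplification)
        x = torch.einsum('bij,bj->bi', A[idx], x) + b[idx]
        pts.append(x)
    return torch.stack(pts, dim=1)            # shape (batch, steps, 2)

def rbf_kernel(x, y, tau=TAU):
    return torch.exp(-torch.cdist(x, y).pow(2) / (2.0 * tau ** 2))

def point_set_loss(gen, tgt):
    """Assumed MMD-style discrepancy between generated and target point clouds."""
    return (rbf_kernel(gen, gen).mean()
            - 2.0 * rbf_kernel(gen, tgt).mean()
            + rbf_kernel(tgt, tgt).mean())

# Placeholder targets; in the paper these would come from the image to reconstruct.
target = torch.rand(BATCH, T, 2)

optimizer = torch.optim.Adam([A_raw, b], lr=LR)
for step in range(STEPS):
    optimizer.zero_grad()
    loss = point_set_loss(chaos_game(A_raw, b), target)
    loss.backward()
    optimizer.step()
```

Note that the paper's Algorithm 2 routes the generation through an FE layer, whereas this toy loop samples the map choice non-differentiably, so only the affine parameters receive gradients.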