Color Visual Illusions: A Statistics-based Computational Model
Authors: Elad Hirsch, Ayellet Tal
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We introduce a tool that computes the likelihood of patches, given a large dataset to learn from. ... This paper provides a general, data-driven explanation to these (and other) illusions. It also presents a model that generates illusions by modifying given natural images in accordance with the above explanation... The network was trained on random patches, sampled from Places [55]... |
| Researcher Affiliation | Academia | Elad Hirsch and Ayellet Tal, Technion - Israel Institute of Technology, {eladhirsch@campus,ayellet@ee}.technion.ac.il |
| Pseudocode | No | The paper describes the framework and methods using text and a flow diagram (Figure 2), but does not contain formally structured pseudocode or an algorithm block. |
| Open Source Code | Yes | Our code is available at https://github.com/eladhi/VI-Glow |
| Open Datasets | Yes | The network was trained on random patches, sampled from Places [55], which is a large scene dataset. (A hedged patch-sampling sketch follows the table.) |
| Dataset Splits | No | The paper mentions training on the Places dataset and evaluating the model, but it does not specify any training/validation/test dataset splits (percentages or counts) or reference predefined splits that include a validation set. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or cloud instance types) used for running the experiments were mentioned in the paper. |
| Software Dependencies | No | The paper states 'We base our framework on Glow [24]' but does not provide specific version numbers for any software dependencies (e.g., deep learning frameworks like PyTorch or TensorFlow, or programming language versions). |
| Experiment Setup | No | The paper states that the architecture uses 'a single flow and K = 32 composed transformations' and that the 'input consists of image patches of size 16x16'. However, it does not provide specific hyperparameter values (e.g., learning rate, batch size, optimizer, number of epochs) or other system-level training settings needed to fully reproduce the experimental setup. (The hedged sketches following the table illustrate the gaps a reproducer would have to fill with assumptions.) |
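
The dataset row confirms that training uses random patches from Places, and the setup row gives the patch size (16x16), but the paper does not describe the extraction procedure. The following is a minimal sketch under those stated facts; the use of torchvision's Places365 wrapper and a single random crop per image are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the authors' code) of the patch-based training data
# described in the paper: random 16x16 patches sampled from the Places dataset.
# torchvision's Places365 wrapper and RandomCrop are assumptions; the paper
# does not say how patches were extracted.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

PATCH_SIZE = 16  # "input consists of image patches of size 16x16"

patch_transform = transforms.Compose([
    transforms.RandomCrop(PATCH_SIZE),  # one random patch per loaded image
    transforms.ToTensor(),              # PIL image -> CHW float tensor in [0, 1]
])

# Places365 here stands in for "Places [55]"; point `root` at a local copy.
places = datasets.Places365(
    root="./data/places365",
    split="train-standard",
    small=True,        # the 256x256 variant is plenty for 16x16 crops
    download=False,
    transform=patch_transform,
)

loader = DataLoader(places, batch_size=256, shuffle=True, num_workers=4)

if __name__ == "__main__":
    patches, _ = next(iter(loader))
    print(patches.shape)  # torch.Size([256, 3, 16, 16])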
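
The dependencies and setup rows state only that the framework is Glow-based, uses a single flow with K = 32 composed transformations, and consumes 16x16 patches; no training recipe is reported. The sketch below shows one way a reproducer might fill those gaps: the `Glow` class and its `log_prob` method are hypothetical stand-ins for the model in the authors' repository (https://github.com/eladhi/VI-Glow), and the optimizer, learning rate, and step budget are assumed placeholders rather than values from the paper.

```python
# Hedged training-loop sketch. From the paper: the framework is Glow-based with
# a single flow, K = 32 composed transformations, and 16x16 patch inputs.
# Everything else is an assumed placeholder: the `Glow` class and `log_prob`
# stand in for the model in the authors' repository, and the optimizer,
# learning rate, and step budget are NOT reported in the paper.
import torch

from vi_glow import Glow  # hypothetical import; substitute the repository's model class

model = Glow(
    in_channels=3,  # RGB patches
    num_flows=1,    # "a single flow"
    num_steps=32,   # "K = 32 composed transformations"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed, not from the paper

for step, (patches, _) in enumerate(loader):  # `loader`: DataLoader from the patch-sampling sketch
    # Normalizing flows are trained by maximizing the exact log-likelihood of the
    # data, i.e. minimizing the negative log-likelihood of each patch batch.
    nll = -model.log_prob(patches).mean()  # hypothetical log_prob API
    optimizer.zero_grad()
    nll.backward()
    optimizer.step()
    if step >= 10_000:  # assumed training budget for the sketch
        break
```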