Mid-Vision Feedback
Authors: Michael Maynord, Eadom T Dessalene, Cornelia Fermuller, Yiannis Aloimonos
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Here we cover experiments demonstrating the utility of our feedback method. We perform evaluations over CIFAR100 (Krizhevsky et al., 2009), ImageNet (Deng et al., 2009), and the Caltech-UCSD Birds dataset (Birds) (Wah et al., 2011), all with multiple context splits. |
| Researcher Affiliation | Academia | Michael Maynord, Eadom Dessalene, Cornelia Fermüller, and Yiannis Aloimonos, Department of Computer Science, University of Maryland, College Park, College Park, MD 20742, USA {maynord,edessale,fermulcm,jyaloimo}@umd.edu |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code will be available at: https://github.com/maynord/Mid-Vision-Feedback |
| Open Datasets | Yes | We perform evaluations over CIFAR100 (Krizhevsky et al., 2009), ImageNet (Deng et al., 2009), and the Caltech-UCSD Birds dataset (Birds) (Wah et al., 2011), all with multiple context splits |
| Dataset Splits | No | The paper specifies training and testing splits for datasets but does not explicitly mention or detail a separate validation set split. For example, for CIFAR100: "Each class contains exactly 500 training images and 100 testing images"; for ImageNet: "we designate 80% of the images in each class for training, but only 2% for testing"; for CUB: "we designate 80% of the dataset for training and 20% for testing." |
| Hardware Specification | No | The paper states: "Each model consumes 1 GPU during train and test time." However, it does not provide specific hardware details such as the model of the GPU, CPU, or other system specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as programming languages, deep learning frameworks (e.g., PyTorch, TensorFlow), or other libraries. |
| Experiment Setup | Yes | The hyperparameters for the modifications made to each base architecture for feedback incorporation are described briefly in Section H of the Appendix, and in detail in Section I of the Appendix. ... A learning rate of 0.001 is chosen for training the base network during the first stage; during the second stage, the base network learning rate is set to 5 × 10⁻⁵, whereas the affine transformation learning rate is set to 1 × 10⁻³. |
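The two-stage learning-rate schedule quoted above can be sketched as optimizer parameter groups. This is a hypothetical illustration only: the paper does not specify a framework, and the group names (`base_params`, `affine_params`) are placeholders, not identifiers from the released code.

```python
# Hedged sketch of the two-stage training setup described in the paper's
# experiment section. Only the learning-rate values (1e-3, 5e-5, 1e-3)
# come from the paper; everything else is an assumed structure.

def make_param_groups(stage: int) -> list[dict]:
    """Build optimizer parameter groups for the given training stage.

    Stage 1: the base network alone trains at lr = 1e-3.
    Stage 2: the base network fine-tunes at lr = 5e-5 while the affine
    feedback transformations train at lr = 1e-3.
    """
    if stage == 1:
        return [{"params": "base_params", "lr": 1e-3}]
    return [
        {"params": "base_params", "lr": 5e-5},
        {"params": "affine_params", "lr": 1e-3},
    ]
```

In a PyTorch-style workflow, each dict would hold the actual parameter tensors and be passed directly to an optimizer constructor.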