Joint Dictionaries for Zero-Shot Learning
Authors: Soheil Kolouri, Mohammad Rostami, Yuri Owechko, Kyungnam Kim
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments: We carried out experiments on three benchmark ZSL datasets and empirically evaluated the resulting performance against recent ZSL algorithms. |
| Researcher Affiliation | Collaboration | Soheil Kolouri HRL Laboratories, LLC skolouri@hrl.com Mohammad Rostami University of Pennsylvania mrostami@seas.upenn.edu Yuri Owechko HRL Laboratories, LLC yowechko@hrl.com Kyungnam Kim HRL Laboratories, LLC kkim@hrl.com |
| Pseudocode | No | The paper describes algorithms (e.g., Lasso, FISTA, EM-like alternation) but does not provide them in a structured pseudocode or clearly labeled algorithm block format. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | Datasets: We conducted our experiments on three benchmark datasets, namely: the Animals with Attributes (AwA1) (Lampert, Nickisch, and Harmeling 2014), the SUN attribute (Patterson and Hays 2012), and the Caltech-UCSD Birds-200-2011 (CUB) (Wah et al. 2011) datasets. |
| Dataset Splits | Yes | We used standard k-fold cross validation to search for the optimal parameters for each dataset. After splitting the datasets accordingly into training, validation, and testing sets, we used performance on the validation set for tuning the parameters in a brute-force search. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., specific GPU/CPU models, memory, or cloud instance types). |
| Software Dependencies | No | The paper mentions tools like VGG19, word2vec, and glove but does not provide specific version numbers for any software, libraries, or frameworks used in the implementation. |
| Experiment Setup | No | Tuning parameters: The optimization regularization parameters λ, ρ, and γ, as well as the number of dictionary atoms r, need to be tuned for maximal performance. We used standard k-fold cross validation to search for the optimal parameters for each dataset. While the paper states *which* parameters are tuned, it does not provide their specific values, search ranges, or other concrete training setup details (e.g., learning rate, batch size, optimizer settings). |
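
The paper cites FISTA for its sparse-coding (Lasso) subproblems but gives no pseudocode. As a point of reference, a minimal sketch of FISTA applied to a generic Lasso problem is shown below; the variable names and objective (`0.5*||Ax - b||^2 + lam*||x||_1`) are illustrative standard forms, not the paper's own notation or dictionary-learning formulation.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 with FISTA.

    Illustrative sketch only; the paper's actual subproblem couples
    learned dictionaries and is not reproduced here.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # gradient of the smooth term at y
        x_next = soft_threshold(y - grad / L, lam / L)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)  # momentum step
        x, t = x_next, t_next
    return x
```

With `A` the identity, the solver reduces to a single soft-thresholding of `b`, which makes the shrinkage behavior easy to verify by hand.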
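
The paper's described tuning procedure (k-fold cross validation with a brute-force search over λ, ρ, γ, and r) can be sketched generically as follows. The helper names, grid values, and `score_fn` interface here are assumptions for illustration; the paper does not specify its grids or scoring code.

```python
import numpy as np
from itertools import product

def k_fold_indices(n, k, seed=0):
    # Shuffle sample indices and split them into k roughly equal folds.
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n), k)

def grid_search_cv(score_fn, X, y, grid, k=5):
    """Brute-force search over a parameter grid with k-fold cross validation.

    score_fn(params, X_tr, y_tr, X_va, y_va) -> validation score (higher is better).
    `grid` maps parameter names (e.g. "lam", "rho", "gamma", "r") to candidate values.
    """
    folds = k_fold_indices(len(X), k)
    best_params, best_score = None, -np.inf
    for values in product(*grid.values()):        # exhaustive (brute-force) sweep
        params = dict(zip(grid.keys(), values))
        scores = []
        for i in range(k):
            va = folds[i]
            tr = np.concatenate([folds[j] for j in range(k) if j != i])
            scores.append(score_fn(params, X[tr], y[tr], X[va], y[va]))
        mean = float(np.mean(scores))
        if mean > best_score:
            best_params, best_score = params, mean
    return best_params, best_score
```

Usage would plug in a `score_fn` that trains the joint dictionaries on the training folds and reports validation-set accuracy; since the paper omits those details, this sketch only fixes the search scaffolding, not the model.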