Learning A Deep ℓ∞ Encoder for Hashing
Authors: Zhangyang Wang, Yingzhen Yang, Shiyu Chang, Qing Ling, Thomas S. Huang
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments demonstrate the impressive performances of the proposed model. We also provide an in-depth analysis of its behaviors against the competitors." ... Section 5, "Experiments in Image Hashing" |
| Researcher Affiliation | Academia | "Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA" and "Department of Automation, University of Science and Technology of China, Hefei, 230027, China" |
| Pseudocode | No | The paper presents mathematical equations and a block diagram (Fig. 1), but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | "The CIFAR10 dataset [Krizhevsky and Hinton, 2009] contains 60K labeled images of 10 different classes." ... "NUS-WIDE [Chua et al., 2009] is a dataset containing 270K annotated images from Flickr." |
| Dataset Splits | No | The paper specifies training and query/test sets, but does not explicitly mention a separate validation set or a split for hyperparameter tuning. It states, "We used a training set of 200 images for each class, and a disjoint query set of 100 images per class. The remaining 59K images are treated as database." (A hypothetical reconstruction of this split appears after the table.) |
| Hardware Specification | No | The paper mentions implementation with "the CUDA Conv Net package", implying the use of NVIDIA GPUs. However, it does not specify any particular GPU model (e.g., RTX, Tesla, A100), CPU model, or other detailed hardware specifications. |
| Software Dependencies | No | The paper mentions "the CUDA Conv Net package" but does not provide any version number for this or any other software dependency, which is necessary for reproducibility. |
| Experiment Setup | Yes | "We use a constant learning rate of 0.01 with no momentum, and a batch size of 128. Different from prior findings such as in [Wang et al., 2016c; 2016b], we discover that untying the values of S1, b1 and S2, b2 boosts the performance more than sharing them." ... "K = 2 by default" (A sketch of these optimization settings appears after the table.) |
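
The split protocol quoted in the Dataset Splits row is concrete enough to reconstruct. Below is a minimal sketch of how the CIFAR-10 partition could be built; since the paper releases no code, the function name, the NumPy framing, and the random seed are assumptions, not the authors' procedure.

```python
# Hypothetical reconstruction of the paper's CIFAR-10 split: 200 training and
# 100 query images per class, with "the remaining 59K images" (everything
# outside the 1K query set) serving as the retrieval database.
import numpy as np

def make_split(labels, n_train=200, n_query=100, seed=0):
    """Return train/query index arrays (disjoint) plus the database index array."""
    rng = np.random.default_rng(seed)
    train, query, database = [], [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        train.extend(idx[:n_train])
        query.extend(idx[n_train:n_train + n_query])
        # Database = all non-query images (60K - 1K = 59K), training included.
        database.extend(np.concatenate([idx[:n_train], idx[n_train + n_query:]]))
    return np.array(train), np.array(query), np.array(database)

labels = np.repeat(np.arange(10), 6000)  # stand-in CIFAR-10 label array
train_idx, query_idx, db_idx = make_split(labels)
print(train_idx.size, query_idx.size, db_idx.size)  # 2000 1000 59000
```

Note that the reading under which the 59K database contains the 2K training images follows from the arithmetic in the quote (60K total minus the 1K query set); the paper does not spell this out.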
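
The hyperparameters in the Experiment Setup row reduce to plain SGD. The sketch below shows those settings in PyTorch; the framework choice, the 48-bit placeholder encoder, and the dummy objective are all assumptions (the paper implemented its model with the CUDA Conv Net package and optimizes a hashing objective, and the untied S1, b1 / S2, b2 parameters belong to its unrolled encoder, which is not reproduced here).

```python
# Reported settings: constant learning rate 0.01, no momentum, batch size 128.
# The encoder and objective are placeholders, not the paper's deep
# l-infinity encoder or its loss.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 48), nn.Tanh())
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.01, momentum=0.0)

loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(torch.randn(2000, 3, 32, 32)),  # stand-in train set
    batch_size=128,
    shuffle=True,
)

for (images,) in loader:
    optimizer.zero_grad()
    codes = encoder(images)                   # near-binary codes via tanh
    loss = (codes.abs() - 1.0).pow(2).mean()  # dummy stand-in objective
    loss.backward()
    optimizer.step()
```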