Sparse similarity-preserving hashing
Authors: Alex M. Bronstein; Pablo Sprechmann; Michael M. Bronstein; Jonathan Masci; Guillermo Sapiro
ICLR 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experimental results): We compare Sparse Hash to several state-of-the-art supervised and semi-supervised hashing methods: DH (Strecha et al., 2012), SSH (Shakhnarovich et al., 2003), AGH (Liu et al., 2011), KSH (Liu et al., 2012), and NNhash (Masci et al., 2011), using codes provided by the authors. For Sparse Hash, we use fully online training via stochastic gradient descent with annealed learning rate and momentum, fixing the maximum number of epochs to 250. A single layer ISTA net is used in all experiments. (Hedged sketches of the ISTA-net encoder and the SGD setup follow this table.) |
| Researcher Affiliation | Academia | Jonathan Masci (jonathan@idsia.ch); Alex M. Bronstein (bron@eng.tau.ac.il); Michael M. Bronstein (michael.bronstein@usi.ch); Pablo Sprechmann (pablo.sprechmann@duke.edu); Guillermo Sapiro (guillermo.sapiro@duke.edu) |
| Pseudocode | No | The paper includes a schematic diagram of a network (Figure 1) but no pseudocode or clearly labeled algorithm block. |
| Open Source Code | No | The paper mentions 'using codes provided by the authors' for *other* methods, but does not state that the code for the proposed Sparse Hash method is open source or publicly available. |
| Open Datasets | Yes | CIFAR10 (Krizhevsky, 2009) is a standard set of 60K labeled images... NUS (Chua et al., 2009) is a dataset containing 270K annotated images from Flickr. |
| Dataset Splits | Yes | CIFAR10: Following (Liu et al., 2012), we used a training set of 200 images for each class; for testing, we used a disjoint query set of 100 images per class and the remaining 59K images as database. NUS: Testing was done on a query set of 100 images per concept; training was performed on 100K pairs of images. (A split-construction sketch for the CIFAR10 protocol follows this table.) |
| Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions 'standard NN learning techniques' and 'stochastic gradient descent' but does not provide specific software names with version numbers for reproducibility. |
| Experiment Setup | Yes | For Sparse Hash, we use fully online training via stochastic gradient descent with annealed learning rate and momentum, fixing the maximum number of epochs to 250. A single layer ISTA net is used in all experiments. (See the training-loop sketch below.) |
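
The quoted setup states that a single-layer ISTA net is used in all experiments, without giving its exact form. As a point of reference only, the sketch below shows what a single-layer ISTA-style encoder typically looks like (one learned linear projection followed by soft-thresholding, yielding a sparse code); the dimensions, threshold value, projection matrix, and the final "active bits" step are hypothetical and not taken from the paper.

```python
import numpy as np

def soft_threshold(z, theta):
    # Element-wise shrinkage (soft-thresholding) operator used in ISTA-type encoders.
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def single_layer_ista_encode(x, W, theta):
    # One linear projection followed by soft-thresholding -> sparse code.
    return soft_threshold(W @ x, theta)

# Toy usage with hypothetical dimensions (128-d descriptor -> 48-d sparse code).
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((48, 128))    # hypothetical learned projection
x = rng.standard_normal(128)                # hypothetical input descriptor
code = single_layer_ista_encode(x, W, theta=0.5)
active_bits = (code != 0).astype(np.uint8)  # illustrative sparse "hash bits", not the paper's binarization
```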
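The training description (fully online stochastic gradient descent with annealed learning rate and momentum, capped at 250 epochs) leaves the loss, annealing schedule, and hyperparameter values unspecified. The following is a generic sketch of such a loop under assumed values (`lr0`, `momentum`, `anneal`) and a user-supplied `grad_fn`; it illustrates the reported optimizer settings, not the authors' implementation.

```python
import numpy as np

def train_sgd(params, grad_fn, batches, max_epochs=250,
              lr0=0.1, momentum=0.9, anneal=0.95):
    # Online SGD with momentum and a geometrically annealed learning rate,
    # capped at the reported maximum of 250 epochs. `batches` must be
    # re-iterable (e.g. a list of mini-batches); `grad_fn(params, batch)`
    # is an assumed user-supplied gradient of the hashing loss.
    velocity = {k: np.zeros_like(v) for k, v in params.items()}
    lr = lr0
    for _ in range(max_epochs):
        for batch in batches:
            grads = grad_fn(params, batch)
            for k in params:
                velocity[k] = momentum * velocity[k] - lr * grads[k]
                params[k] += velocity[k]
        lr *= anneal  # anneal the learning rate after each epoch
    return params
```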
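For the CIFAR10 protocol quoted under Dataset Splits, the sketch below shows one way to reproduce the per-class partition: 200 training images per class, a disjoint query set of 100 per class, and all remaining (non-query) images as the retrieval database (59K of CIFAR10's 60K images). The random seeding and index handling are assumptions, not details from the paper.

```python
import numpy as np

def make_cifar10_splits(labels, n_train=200, n_query=100, seed=0):
    # Per-class split: 100 query images per class, 200 training images per class
    # (disjoint from the queries), and everything except the queries as database.
    rng = np.random.default_rng(seed)
    train_idx, query_idx = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        query_idx.extend(idx[:n_query])
        train_idx.extend(idx[n_query:n_query + n_train])
    query_idx = np.array(query_idx)
    db_idx = np.setdiff1d(np.arange(len(labels)), query_idx)
    return np.array(train_idx), query_idx, db_idx

# Toy usage on synthetic labels (10 classes, 60K samples, as in CIFAR10).
labels = np.repeat(np.arange(10), 6000)
train_idx, query_idx, db_idx = make_cifar10_splits(labels)
assert len(query_idx) == 1000 and len(db_idx) == 59000
```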