SignRFF: Sign Random Fourier Features
Authors: Xiaoyun Li, Ping Li
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In addition, experiments are conducted to compare SignRFF with a wide range of data-dependent and deep learning based hashing methods and show the advantage of SignRFF with a sufficient number of hash bits. We conduct experiments to demonstrate the effectiveness of our approach and justify that ranking efficiency indeed provides reliable prediction of the empirical search accuracy. |
| Researcher Affiliation | Industry | Xiaoyun Li, Ping Li, LinkedIn Ads, 700 Bellevue Way NE, Bellevue, WA 98004, USA |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states "Did you include the code, data, and instructions needed to reproduce the main experimental results? [Yes] Public datasets." but does not provide a concrete link or explicit instruction on how to access the code. |
| Open Datasets | Yes | We use three popular benchmark datasets for image retrieval. The SIFT dataset [23] contains 1M 128-dimensional SIFT image features, and 1000 query samples. The MNIST dataset [29] contains 60000 hand-written digits. The CIFAR dataset [25] contains 50000 natural images and we use the gray-scale images in our experiments. |
| Dataset Splits | No | The paper mentions using a "test set" for queries but does not explicitly provide train/validation/test splits (e.g., percentages or counts) for all datasets used for training models. |
| Hardware Specification | Yes | Our experiments are performed on a single core 2.0GHz CPU. |
| Software Dependencies | No | The paper mentions using "VGG-16" for features but does not provide specific version numbers for any software dependencies, libraries, or programming languages used in the experiments. |
| Experiment Setup | Yes | For methods (5)-(7) involving Gaussian kernel, we tune γ on a fine grid over 0.1-5 and report the best result. |
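For context on the γ parameter tuned above: SignRFF hashes data by taking the sign of random Fourier features for the Gaussian kernel. The sketch below illustrates the standard construction (w drawn from a Gaussian scaled by γ, a uniform phase shift, then the sign of the cosine feature); the function name, `gamma` parameterization, and defaults are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sign_rff_hash(X, n_bits, gamma=1.0, seed=0):
    """Map rows of X to n_bits binary codes via the sign of random
    Fourier features approximating a Gaussian kernel.

    Sketch of the standard construction (assumed, not verbatim from
    the paper): w_j ~ N(0, gamma * I), b_j ~ Uniform[0, 2*pi),
    bit_j(x) = sign(cos(w_j^T x + b_j)).
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random projection directions; variance gamma sets the kernel bandwidth.
    W = rng.normal(scale=np.sqrt(gamma), size=(d, n_bits))
    # Random phase offsets, one per hash bit.
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_bits)
    # Sign of the cosine feature gives a +/-1 code per bit.
    return np.sign(np.cos(X @ W + b)).astype(np.int8)

# Usage: hash 5 random 128-dimensional vectors to 16-bit codes.
X = np.random.default_rng(1).normal(size=(5, 128))
codes = sign_rff_hash(X, n_bits=16, gamma=0.5)
```

Retrieval then ranks database points by Hamming distance between their codes and the query's code; tuning `gamma` on a grid, as the paper reports for the Gaussian-kernel methods, trades off how finely the codes resolve nearby versus distant points.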