Understanding Hyperdimensional Computing for Parallel Single-Pass Learning

Authors: Tao Yu, Yichi Zhang, Zhiru Zhang, Christopher M. De Sa

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that our RFF method and group VSA can both outperform the state-of-the-art HDC model by up to 7.6% while maintaining hardware efficiency.
Researcher Affiliation | Academia | Tao Yu (Cornell University, tyu@cs.cornell.edu); Yichi Zhang (Cornell University, yz2499@cs.cornell.edu); Zhiru Zhang (Cornell University, zhiruz@cs.cornell.edu); Christopher De Sa (Cornell University, cdesa@cs.cornell.edu)
Pseudocode | Yes | Algorithm 1: Construct correlated hypervectors. Input: similarity matrix M ∈ R^{n×n}, dimension d. Let Σ̂ = sin((π/2)M) (elementwise); let UΛUᵀ = Σ̂ (symmetric eigendecomposition); sample X ∈ R^{n×d} with iid unit Gaussian entries; return sgn(UΛ^{1/2}X) (elementwise). A runnable sketch of this construction follows the table.
Open Source Code | Yes | Our code is available on github https://github.com/Cornell-RelaxML/Hyperdimensional-Computing.
Open Datasets | Yes | We evaluate the performance of proposed methods on two conventional HDC datasets, ISOLET [Dua and Graff, 2017] and UCIHAR [Anguita et al., 2012]. We also evaluate our method on MNIST and Fashion-MNIST [Xiao et al., 2017]. The data split is the default for each dataset.
Dataset Splits | Yes | We evaluate the performance of proposed methods on two conventional HDC datasets, ISOLET [Dua and Graff, 2017] and UCIHAR [Anguita et al., 2012]. We also evaluate our method on MNIST and Fashion-MNIST [Xiao et al., 2017]. The data split is the default for each dataset.
Hardware Specification | Yes | We train on Intel Xeon CPUs.
Software Dependencies | No | The paper does not provide specific software names with version numbers for dependencies.
Experiment Setup | Yes | Setups. For ISOLET and UCIHAR, we quantize the features to 8 bits before encoding. We initialize a 10,000-dimensional basis hypervector for each {0, ..., 255} feature value, then encode raw inputs as described in Section 5 or 6. During the training stage, we use a learning rate of 0.01 and train classifiers for 10 epochs.
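The quoted pseudocode (Algorithm 1) maps a target similarity matrix to correlated bipolar hypervectors. Below is a minimal NumPy sketch of that construction; the function name, the eigenvalue clipping, and the small demonstration matrix are illustrative assumptions rather than code taken from the paper's repository.

```python
import numpy as np

def construct_correlated_hypervectors(M: np.ndarray, d: int, seed: int = 0) -> np.ndarray:
    """Return an (n, d) matrix of +/-1 hypervectors whose pairwise
    similarities (inner product / d) approximate the entries of M."""
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    sigma_hat = np.sin(0.5 * np.pi * M)     # elementwise: Sigma_hat = sin(pi/2 * M)
    lam, U = np.linalg.eigh(sigma_hat)      # symmetric eigendecomposition Sigma_hat = U diag(lam) U^T
    lam = np.clip(lam, 0.0, None)           # guard against tiny negative eigenvalues (assumption)
    X = rng.standard_normal((n, d))         # iid unit Gaussians
    G = U @ (np.sqrt(lam)[:, None] * X)     # each column is Gaussian with covariance Sigma_hat
    return np.sign(G)                       # elementwise sign -> bipolar hypervectors

# Hypothetical example: symbols 0 and 1 should be ~0.5 similar, symbol 2 unrelated.
M = np.array([[1.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
H = construct_correlated_hypervectors(M, d=10_000)
print(H @ H.T / 10_000)   # empirical similarities, expected to be close to M
```

The sign of correlated Gaussians is used because E[sgn(g_i)sgn(g_j)] = (2/π)arcsin(ρ) when g_i and g_j have correlation ρ, so pre-warping M elementwise with sin((π/2)·) makes the expected bipolar similarity equal M itself.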
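The experiment-setup row mentions 8-bit feature quantization and one 10,000-dimensional basis hypervector per feature value. A hedged sketch of that preprocessing follows; the min-max quantizer, the feature count, and the bind-and-bundle encoder are common HDC defaults assumed for illustration, not the specific encoders described in the paper's Sections 5 and 6.

```python
import numpy as np

D = 10_000            # hypervector dimensionality from the quoted setup
NUM_FEATURES = 617    # e.g., ISOLET samples have 617 features
rng = np.random.default_rng(0)

def quantize_8bit(x: np.ndarray) -> np.ndarray:
    """Min-max quantize real-valued features of shape (N, F) to integers in {0, ..., 255}."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return np.round(255.0 * (x - lo) / np.maximum(hi - lo, 1e-12)).astype(np.int64)

# One random bipolar basis hypervector per possible 8-bit feature value,
# and one per feature position (the position binding is an assumption).
value_hvs = rng.choice([-1.0, 1.0], size=(256, D))
pos_hvs = rng.choice([-1.0, 1.0], size=(NUM_FEATURES, D))

def encode(x_q: np.ndarray) -> np.ndarray:
    """Encode one quantized sample of shape (NUM_FEATURES,) by binding value and
    position hypervectors (elementwise product) and bundling (summing) over features."""
    return (value_hvs[x_q] * pos_hvs).sum(axis=0)
```

A linear classifier over these D-dimensional encodings would then be trained with a learning rate of 0.01 for 10 epochs to match the quoted training settings.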