Learning with Holographic Reduced Representations

Authors: Ashwinkumar Ganesan, Hang Gao, Sunil Gandhi, Edward Raff, Tim Oates, James Holt, Mark McLean

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments and ablations will focus on the impact of replacing a simple output layer with HRRs, and other changes to the HRR approach, to better understand its behavior. We will now conduct an experimental evaluation of our new HRR-based approach to XML classification. Table 1 shows the performance of the HRR approach against its respective baselines, evaluated at k = 1 for brevity. (A minimal sketch of HRR binding appears after the table.)
Researcher Affiliation | Collaboration | 1 University of Maryland, Baltimore County; 2 Laboratory for Physical Sciences; 3 Booz Allen Hamilton
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | Our code can be found at https://github.com/NeuromorphicComputationResearchProgram/Learning-with-Holographic-Reduced-Representations
Open Datasets | Yes | To assess the impact of dense label representations, we use eight of the datasets from Bhatia et al. [44]. [44] K. Bhatia, K. Dahiya, H. Jain, A. Mittal, Y. Prabhu, and M. Varma, "The extreme classification repository: Multi-label datasets and code," 2016. [Online]. Available: http://manikvarma.org/downloads/XC/XMLRepository.html
Dataset Splits | No | The paper mentions that 'Bhatia et al. [44] split the dataset into small scale and large datasets' and that they used these datasets, but it does not provide specific percentages or counts for their own train, validation, and test splits used for reproducibility.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments (e.g., CPU/GPU models, memory).
Software Dependencies | No | The paper mentions PyTorch as a framework but does not specify any version numbers for PyTorch or other software dependencies.
Experiment Setup | Yes | The baseline multi-label network is a fully-connected (FC) network with two hidden layers. Both hidden layers have the same size of 512 with a ReLU activation. The baseline network has L outputs trained with binary cross-entropy (BCE) loss with an appropriate sigmoid activation. For a CNN-based model we will use the XML-CNN approach of [35]. Their original architecture with their code is used as the baseline. (A sketch of this FC baseline appears after the table.)
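
The HRR approach referenced in the Research Type row centers on binding and unbinding label vectors via circular convolution. Below is a minimal sketch of that operation using PyTorch's FFT routines; the dimensionality, random key/value vectors, and normalization are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F


def hrr_binding_demo(d: int = 512) -> None:
    """Sketch of HRR binding/unbinding via circular convolution in the Fourier domain.

    The vectors and dimensionality here are placeholders chosen for illustration.
    """
    torch.manual_seed(0)
    # Scale so each element has variance 1/d, the usual HRR convention.
    key = torch.randn(d) / d ** 0.5
    value = torch.randn(d) / d ** 0.5

    # Binding: circular convolution, computed as elementwise product of FFTs.
    bound = torch.fft.irfft(torch.fft.rfft(key) * torch.fft.rfft(value), n=d)

    # Unbinding: circular correlation with the key (an approximate inverse).
    retrieved = torch.fft.irfft(torch.conj(torch.fft.rfft(key)) * torch.fft.rfft(bound), n=d)

    # The retrieved vector should be close, in cosine similarity, to the original value.
    sim = F.cosine_similarity(retrieved, value, dim=0)
    print(f"cosine similarity after unbinding: {sim:.3f}")


if __name__ == "__main__":
    hrr_binding_demo()
```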
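The Experiment Setup row describes the baseline architecture precisely enough to sketch it. The following is a hedged PyTorch sketch of that fully-connected baseline; the input dimensionality, label count, and the use of BCEWithLogitsLoss (which folds the sigmoid into the loss) are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn


class BaselineMultiLabelNet(nn.Module):
    """FC baseline per the row above: two 512-unit hidden layers with ReLU,
    and L outputs trained with BCE under a sigmoid activation."""

    def __init__(self, input_dim: int, num_labels: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_labels),  # one logit per label
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Return raw logits; pair with BCEWithLogitsLoss, which applies the sigmoid.
        return self.net(x)


# Usage sketch with placeholder sizes (input_dim and num_labels are hypothetical).
model = BaselineMultiLabelNet(input_dim=5000, num_labels=983)
criterion = nn.BCEWithLogitsLoss()
```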