ND-MRM: Neuronal Diversity Inspired Multisensory Recognition Model

Authors: Qixin Wang, Chaoqiong Fan, Tianyuan Jia, Yuyang Han, Xia Wu

AAAI 2024

Reproducibility Variable Result LLM Response
Research Type: Experimental. "To validate the performance of the proposed ND-MRM, we employ a multisensory emotion recognition task as a case study. The results demonstrate that our model surpasses state-of-the-art brain-inspired baselines on two datasets, proving the potential of brain-inspired methods for advancing multisensory interaction and recognition."
Researcher Affiliation: Academia. School of Artificial Intelligence, Beijing Normal University, Beijing, China. {qxwang, fcq, tianyj, yuyang han}@mail.bnu.edu.cn, wuxia@bnu.edu.cn
Pseudocode: No. The paper describes the model and its equations but does not contain a clearly labeled "Pseudocode" or "Algorithm" block.
Open Source Code: No. The paper does not provide any concrete access to source code for the described methodology.
Open Datasets: Yes. "Our model is evaluated using two datasets. The first dataset is eNTERFACE'05 (Martin et al. 2006)... The second dataset utilized is the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) (Livingstone and Russo 2018)."
Dataset Splits: No. The paper uses the two datasets but does not give the split information (exact percentages, sample counts, or splitting methodology) needed to reproduce the train/validation partitioning.
Hardware Specification: No. The paper does not report the hardware used for its experiments (e.g., GPU/CPU models or memory amounts).
Software Dependencies: No. The paper does not list the ancillary software (e.g., library or solver names with version numbers) needed to replicate the experiments.
Experiment Setup: Yes. "In the convolutional layers of the ND-MRM, the number of channels is set to 4, and the kernel size is 5×5. Both Re(v) and Re(v) consist of two fully connected layers (nl = 200) and an output layer with the same number of labels as each dataset. The hyperparameter ρ is initially set to 0.7. The capacitance C is 1 µF/cm², g is 0.2 nS, the time constant is 1 ms, and the resting potential V1 equals the reset potential V2 at 0 mV. The firing threshold is initially 0.5 mV. For the adaptive threshold, we set α = 0.9, β = 0.1, and γ = 1."
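The neuron constants quoted above (C, g, time constant, resting/reset potential, initial threshold, and the α/β/γ adaptive-threshold coefficients) can be wired into a minimal leaky integrate-and-fire simulation. This is only a sketch: the paper's exact adaptive-threshold update rule is not reproduced in the review, so the particular α/β/γ combination below (decay toward the γ-scaled base threshold, bump on spikes) is an illustrative assumption, not the authors' equation.

```python
# Hedged sketch of an LIF neuron with an adaptive firing threshold,
# plugging in the constants reported in the paper's experiment setup.
# The adaptive-threshold rule itself is an assumption for illustration.

C = 1.0        # membrane capacitance, 1 uF/cm^2
G = 0.2        # leak conductance, 0.2 nS
TAU = 1.0      # membrane time constant, 1 ms
V_REST = 0.0   # resting potential, 0 mV (equal to the reset potential)
V_RESET = 0.0  # reset potential, 0 mV
THETA0 = 0.5   # initial firing threshold, 0.5 mV
ALPHA, BETA, GAMMA = 0.9, 0.1, 1.0  # adaptive-threshold coefficients

def lif_step(v, theta, i_in, dt=0.1):
    """One Euler step of C dV/dt = -g (V - V_rest) + I, then threshold check."""
    dv = (-G * (v - V_REST) + i_in) / C
    v = v + dt * dv / TAU
    spike = v >= theta
    if spike:
        v = V_RESET  # hard reset after a spike
    # Assumed adaptive rule: decay toward GAMMA * THETA0, bump by BETA on spikes.
    theta = ALPHA * theta + BETA * float(spike) + (1 - ALPHA) * GAMMA * THETA0
    return v, theta, spike

# Drive the neuron with a constant input current and count spikes.
v, theta = V_REST, THETA0
spikes = 0
for _ in range(100):
    v, theta, s = lif_step(v, theta, i_in=1.0)
    spikes += int(s)
```

With a constant suprathreshold input, the membrane potential repeatedly climbs to the threshold, fires, and resets, while the threshold transiently rises after each spike and relaxes back toward its 0.5 mV baseline.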