A Grassmannian Manifold Self-Attention Network for Signal Classification

Authors: Rui Wang, Chen Hu, Ziheng Chen, Xiao-Jun Wu, Xiaoning Song

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results on three benchmarking datasets (a drone recognition dataset and two EEG signal classification datasets) demonstrate the superiority of our method over the state-of-the-art."
Researcher Affiliation | Academia | "Rui Wang (1,2), Chen Hu (1,2), Ziheng Chen (3), Xiao-Jun Wu (1,2) and Xiaoning Song (1,2). (1) School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, China; (2) Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi, China; (3) Department of Information Engineering and Computer Science, University of Trento, Trento, Italy"
Pseudocode | Yes |
Algorithm 1: Grassmannian Manifold Self-Attention Module
Input: a sequence of Grassmannian data {Y_r}_{r=1}^m
Parameters: the projection matrices W_q, W_k, W_v
Output: a sequence of Grassmannian data {Ỹ_r}_{r=1}^m
1: for r = 1 to m do
2:   Q̃_r = f_orm(Q_r) = f_orm(W_q Y_r)
3:   K̃_r = f_orm(K_r) = f_orm(W_k Y_r)
4:   Ṽ_r = f_orm(V_r) = f_orm(W_v Y_r)
5: end for
6: for all r, j ∈ {1, 2, ..., m}: A := [D_rj]_{m×m}, with D_rj = 1 / (1 + log(1 + d²_PM(Q̃_r, K̃_j)))
7: Â := Softmax(A) = [D̂_rj]_{m×m}
8: for r = 1 to m do
9:   V̂_r = Σ_{j=1}^m D̂_rj (Ṽ_j Ṽ_jᵀ)
10:  Ỹ_r = f_reo(V̂_r) = Z_{1:q}
11: end for
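The steps of Algorithm 1 can be sketched in NumPy as below. This is an illustrative reading of the excerpt, not the authors' released implementation: it assumes f_orm is QR-based re-orthonormalization and f_reo keeps the eigenvectors Z_{1:q} of the q largest eigenvalues; neither map is fully specified in the excerpt.

```python
import numpy as np

def reorthonormalize(M):
    # f_orm: map a projected matrix back onto the Grassmannian.
    # Assumed here to be QR-based re-orthonormalization.
    Q, _ = np.linalg.qr(M)
    return Q

def proj_metric_sq(X, Y):
    # Squared projection metric distance d^2_PM between span(X) and span(Y).
    return 0.5 * np.linalg.norm(X @ X.T - Y @ Y.T, "fro") ** 2

def grassmann_self_attention(Ys, Wq, Wk, Wv, q):
    """One pass of Algorithm 1 over a list of orthonormal d x q matrices Ys."""
    m = len(Ys)
    # Lines 1-5: project with W_q, W_k, W_v and re-orthonormalize.
    Qs = [reorthonormalize(Wq @ Y) for Y in Ys]
    Ks = [reorthonormalize(Wk @ Y) for Y in Ys]
    Vs = [reorthonormalize(Wv @ Y) for Y in Ys]
    # Line 6: similarity matrix with D_rj = 1 / (1 + log(1 + d^2_PM(Q_r, K_j))).
    A = np.array([[1.0 / (1.0 + np.log1p(proj_metric_sq(Qs[r], Ks[j])))
                   for j in range(m)] for r in range(m)])
    # Line 7: row-wise softmax.
    Ahat = np.exp(A) / np.exp(A).sum(axis=1, keepdims=True)
    outputs = []
    for r in range(m):
        # Line 9: attention-weighted sum of projectors V_j V_j^T.
        V_hat = sum(Ahat[r, j] * (Vs[j] @ Vs[j].T) for j in range(m))
        # Line 10 (f_reo): eigenvectors of the q largest eigenvalues
        # give an orthonormal basis back on the Grassmannian.
        _, Z = np.linalg.eigh(V_hat)  # eigenvalues in ascending order
        outputs.append(Z[:, ::-1][:, :q])
    return outputs
```

Each output Ỹ_r is again a d x q matrix with orthonormal columns, so the module maps Grassmannian inputs to Grassmannian outputs, as the algorithm requires.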
Open Source Code | Yes | "The code and supplementary material for this work can be found at https://github.com/ChenHu-ML/GDLNet."
Open Datasets | Yes | "In this section, we assess the performance of the proposed GDLNet in two distinct classification tasks: drone recognition using the RADAR dataset [Brooks et al., 2019] and EEG signal classification employing the MAMEM-SSVEP-II dataset [Pan et al., 2022] and the BCI-ERN dataset [Margaux et al., 2012], respectively."
Dataset Splits | Yes | "For a fair comparison, we follow [Brooks et al., 2019] to designate 50%, 25%, and 25% of the obtained 3,000 orthonormal matrices to the training set, validation set, and test set, respectively. To be specific, the initial four sessions of each subject serve as the training set, in which one out of four (i.e., session 4) is used for validation, and the remaining session 5 is allotted for testing."
Hardware Specification | Yes | "Besides, our model is trained on a PC equipped with an i7-13700H CPU and 32GB of RAM."
Software Dependencies | No | The paper mentions training on a PC but does not provide specific software dependencies with version numbers, such as deep learning frameworks or libraries.
Experiment Setup | Yes | "Besides, the learning rate of GDLNet is configured as 5e-3, and the batch size is fixed to 50. The maximum number of training epochs of the proposed GDLNet is set to 150 and 130 on the SSVEP and ERN datasets, respectively. On the SSVEP dataset, the size of each transformation matrix in the GMT layer, learning rate of GDLNet, and batch size are respectively configured as 19×21, 5e-3, and 64, while those on the ERN dataset are set to 12×15, 4e-2, and 30, respectively."