Abstract: Electroencephalogram (EEG) signals, which originate from the activity of the central nervous system and are difficult to disguise, have been widely used for emotion recognition; however, their weak and non-stationary nature gives rise to pronounced individual differences. To accommodate the distribution discrepancies between subjects, transfer learning has been introduced into EEG-based emotion recognition. Existing methods, however, neither couple domain adaptation with label estimation effectively, nor look beyond recognition accuracy and data distribution, leaving the properties of the shared subspace unexplored. To address these problems, this study proposes an EEG emotion recognition method that combines bi-projection domain adaptation with graph-based semi-supervised label estimation. Cross-subject emotion recognition was evaluated on the SEED-IV emotion dataset, in which EEG was recorded from 15 subjects watching film clips with clear emotional tendencies in three separate sessions (Session 1, Session 2, Session 3). The results show that the average recognition accuracies of the proposed method on the three sessions (77.7%, 78.5%, and 79.6%) are superior to those of a variety of existing transfer models: compared with the classical joint distribution adaptation (JDA) method, the average accuracy improved substantially (Session 2: 53.7% vs. 78.5%), and compared with recently proposed models the improvement was at least 8.9% (Session 2 vs. MEKT). In addition, the EEG emotional activation patterns embedded in the shared subspace were explored from the perspective of feature importance. The averaged frequency-band weights show that the γ band is more important than the other four bands, and one-way ANOVA verified significant differences between the γ band and the other four bands (P<0.05); the brain topographic maps show that the (central) parietal region carries higher weights than the other brain regions. This work provides a reference for learning and analyzing EEG emotional activation patterns.
Abstract: Electroencephalogram (EEG) has been widely used for objective emotion recognition because it is generated by the neural activities of the central nervous system and is hard to camouflage. An obvious limitation is that the weak and non-stationary properties of EEG cause individual differences in emotion recognition. To this end, transfer learning models have been introduced to deal with this dilemma. However, the existing models fail to couple the feature adaptation process with the target label estimation process; moreover, they focus only on recognition accuracy and do not sufficiently investigate the learned shared subspace. To solve these problems, this paper proposes a joint bi-projection domain adaptation and graph-based semi-supervised label estimation model for EEG emotion recognition (termed RAGE). We evaluated the effectiveness of the proposed RAGE model on the benchmark SEED-IV emotional dataset, which was collected by playing film clips with obvious emotional tendencies to 15 subjects in three different sessions. Results showed that the average recognition accuracies of the three sessions (77.7%, 78.5%, and 79.6%) were much better than those of many existing transfer learning models. Specifically, compared with the classical joint distribution adaptation (JDA) method, the average recognition accuracy was greatly improved (Session 2: 53.7% vs. 78.5%). In comparison with four recently proposed models, RAGE obtained a minimum accuracy improvement of 8.9% (Session 2 vs. MEKT, manifold embedded knowledge transfer). By investigating the learned common subspace from the feature importance perspective, we gained further insight into the occurrence of affective effects: the averaged band weights showed that the importance of the γ band was greater than that of the other four bands, and one-way ANOVA verified significant differences from the other four frequency bands (P<0.05); the brain topographic maps showed that the (central) parietal region had higher weights than the other brain regions. In addition, class-specific emotional EEG activation patterns were studied using a label-specific feature learning algorithm. In summary, this research provides a reference for the study and analysis of EEG emotional activation patterns.
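To make the label-estimation half of the abstract concrete, the following Python sketch illustrates graph-based semi-supervised label estimation in the spirit of the harmonic-function propagation cited in [20]. It is not the authors' RAGE implementation; the function name propagate_labels, the Gaussian affinity graph, and the random "DE feature" matrices are illustrative assumptions standing in for real SEED-IV features.

import numpy as np
from scipy.spatial.distance import cdist

def propagate_labels(X_src, y_src, X_tgt, n_classes, sigma=1.0):
    """Estimate target labels via harmonic-function propagation on a Gaussian graph."""
    X = np.vstack([X_src, X_tgt])
    n_l = X_src.shape[0]
    W = np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))  # pairwise affinities
    L = np.diag(W.sum(axis=1)) - W                              # graph Laplacian
    L_uu, L_ul = L[n_l:, n_l:], L[n_l:, :n_l]                   # unlabelled blocks
    Y_l = np.eye(n_classes)[y_src]                              # one-hot source labels
    F_u = np.linalg.solve(L_uu, -L_ul @ Y_l)                    # harmonic solution F_u = -L_uu^{-1} L_ul Y_l
    return F_u.argmax(axis=1)                                   # predicted target labels

rng = np.random.default_rng(0)
X_src = rng.normal(size=(40, 310))    # synthetic stand-in: 62 channels x 5 bands of DE features
y_src = rng.integers(0, 4, size=40)   # SEED-IV has 4 emotion classes
X_tgt = rng.normal(size=(30, 310))
print(propagate_labels(X_src, y_src, X_tgt, n_classes=4)[:10])

The reported band-level significance test can likewise be reproduced in spirit with a one-way ANOVA over per-subject band weights; the weight matrix below is synthetic and only demonstrates the intended call.

from scipy.stats import f_oneway

band_weights = rng.random(size=(15, 5))          # hypothetical: 15 subjects x 5 band-importance weights
delta, theta, alpha, beta, gamma = band_weights.T
f_stat, p_val = f_oneway(delta, theta, alpha, beta, gamma)
print(f"F = {f_stat:.3f}, p = {p_val:.4f}")      # the paper reports P < 0.05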
[1] Nie Dan, Wang Xiaowei, Duan Ruonan, et al. A survey on EEG-based emotion recognition [J]. Chinese Journal of Biomedical Engineering, 2012, 31(4): 595-606.
[2] Becker H, Fleureau J, Guillotel P, et al. Emotion recognition based on high-resolution EEG recordings and reconstructed brain sources [J]. IEEE Transactions on Affective Computing, 2020, 11(2): 244-257.
[3] Salovey P, Mayer JD. Emotional intelligence [J]. Imagination, Cognition and Personality, 1990, 9(3): 185-211.
[4] Yao Dezhong. Brain-computer interface: from magic to reality [J]. Chinese Journal of Biomedical Engineering, 2014, 33(6): 641-643.
[5] Quan Xueliang, Zeng Zhigang, Jiang Jianhua, et al. A review of affective computing based on physiological signals [J]. Acta Automatica Sinica, 2021, 47(8): 1-16.
[6] Murugappan M, Rizon M, Nagarajan R, et al. Time-frequency analysis of EEG signals for human emotion detection [C] // International Conference on Biomedical Engineering. Kuala Lumpur: Springer, 2008, 21: 262-265.
[7] Li Xiang, Song Dawei, Zhang Peng, et al. Exploring EEG features in cross-subject emotion recognition [J]. Frontiers in Neuroscience, 2018, 12: 162.
[8] Zheng Weilong, Lu Baoliang. Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks [J]. IEEE Transactions on Autonomous Mental Development, 2015, 7(3): 162-175.
[9] Zheng Weilong, Zhu Jiayi, Lu Baoliang. Identifying stable patterns over time for emotion recognition from EEG [J]. IEEE Transactions on Affective Computing, 2019, 10(3): 417-429.
[10] Gong Boqing, Shi Yuan, Sha Fei, et al. Geodesic flow kernel for unsupervised domain adaptation [C] // IEEE Conference on Computer Vision and Pattern Recognition. Providence: IEEE Computer Society, 2012: 2066-2073.
[11] Wang Yixin, Qiu Shuang, Ma Xuelin, et al. A prototype-based SPD matrix network for domain adaptation EEG emotion recognition [J]. Pattern Recognition, 2021, 110: 107626.
[12] Bahador N, Kortelainen J. Deep learning-based classification of multichannel bio-signals using directedness transfer learning [J]. Biomedical Signal Processing and Control, 2022, 72: 103300.
[13] Niu Shuteng, Liu Yongxin, Wang Jian, et al. A decade survey of transfer learning (2010-2020) [J]. IEEE Transactions on Artificial Intelligence, 2021, 1(2): 151-166.
[14] Zhuang Fuzhen, Qi Zhiyuan, Duan Keyu, et al. A comprehensive survey on transfer learning [J]. Proceedings of the IEEE, 2021, 109(1): 43-76.
[15] Li Jinpeng, Qiu Shuang, Du Chengde, et al. Domain adaptation for EEG emotion recognition based on latent representation similarity [J]. IEEE Transactions on Cognitive and Developmental Systems, 2020, 12(2): 344-353.
[16] Lan Zirui, Sourina O, Wang Lipo, et al. Domain adaptation techniques for EEG-based emotion recognition: a comparative study on two public datasets [J]. IEEE Transactions on Cognitive and Developmental Systems, 2019, 11(1): 85-94.
[17] Luo Junhai, Wu Man, Wang Zhiyan, et al. Progressive low-rank subspace alignment based on semi-supervised joint domain adaption for personalized emotion recognition [J]. Neurocomputing, 2021, 456: 312-326.
[18] Ding Zhengming, Li Sheng, Shao Ming, et al. Graph adaptive knowledge transfer for unsupervised domain adaptation [C] // European Conference on Computer Vision. Munich: Springer, Cham, 2018: 37-52.
[19] Gretton A, Sriperumbudur B, Sejdinovic D, et al. Optimal kernel choice for large-scale two-sample tests [C] // International Conference on Neural Information Processing Systems. Lake Tahoe: Curran Associates Inc., 2012: 1205-1213.
[20] Zhu Xiaojin, Ghahramani Z, Lafferty JD. Semi-supervised learning using Gaussian fields and harmonic functions [C] // International Conference on Machine Learning. Washington: AAAI Press, 2003: 912-919.
[21] Cai Deng, He Xiaofei, Han Jiawei, et al. Graph regularized nonnegative matrix factorization for data representation [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2010, 33(8): 1548-1560.
[22] Nie Feiping, Wang Xiaoqian, Huang Heng. Clustering and projected clustering with adaptive neighbors [C] // ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: Association for Computing Machinery, 2014: 977-986.
[23] Nie Feiping, Wang Xiaoqian, Jordan MI, et al. The constrained Laplacian rank algorithm for graph-based clustering [C] // AAAI Conference on Artificial Intelligence. Phoenix: AAAI Press, 2016: 1969-1976.
[24] Gutman I, Martins EA, Robbiano M, et al. Ky Fan theorem applied to Randić energy [J]. Linear Algebra and Its Applications, 2014, 459(3): 23-42.
[25] Peng Yong, Zhu Xin, Nie Feiping, et al. Fuzzy graph clustering [J]. Information Sciences, 2021, 571(3): 38-49.
[26] Peng Yong, Qin Feiwei, Kong Wanzeng, et al. GFIL: a unified framework for the importance analysis of features, frequency bands and channels in EEG-based emotion recognition [J]. IEEE Transactions on Cognitive and Developmental Systems, 2022, 14(3): 935-947.
[27] Nie Feiping, Huang Heng, Cai Xiao. Efficient and robust feature selection via joint ℓ2,1-norms minimization [C] // International Conference on Neural Information Processing Systems. Vancouver: Curran Associates Inc., 2010: 1813-1821.
[28] Huang Jun, Li Guorong, Huang Qingming, et al. Learning label specific features for multi-label classification [C] // IEEE International Conference on Data Mining. Atlantic City: IEEE Computer Society, 2015: 181-190.
[29] Zheng Weilong, Liu Wei, Lu Yifei, et al. EmotionMeter: a multimodal framework for recognizing human emotions [J]. IEEE Transactions on Cybernetics, 2019, 49(3): 1110-1122.
[30] Long Mingsheng, Wang Jianmin, Ding Guiguang, et al. Transfer feature learning with joint distribution adaptation [C] // IEEE International Conference on Computer Vision. Sydney: IEEE Computer Society, 2013: 2200-2207.
[31] Ke Yan, Lu Kou, Zhang David. Learning domain-invariant subspace using domain features and independence maximization [J]. IEEE Transactions on Cybernetics, 2018, 48(1): 288-299.
[32] Song Peng, Zheng Wenming. Feature selection based transfer subspace learning for speech emotion recognition [J]. IEEE Transactions on Affective Computing, 2020, 11(3): 373-382.
[33] Zhang Wen, Wu Dongrui. Manifold embedded knowledge transfer for brain-computer interfaces [J]. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2020, 28(5): 1117-1127.
[34] Cui Jin, Jin Xuanyu, Hu Hua, et al. Dynamic distribution alignment with dual-subspace mapping for cross-subject driver mental state detection [J]. IEEE Transactions on Cognitive and Developmental Systems, 22 Dec, 2021 [Epub ahead of print].
[35] Li Jinpeng, Qiu Shuang, Shen Yuanyuan, et al. Multisource transfer learning for cross-subject EEG emotion recognition [J]. IEEE Transactions on Cybernetics, 2020, 50(7): 3281-3293.
[36] Van der Maaten L, Hinton G. Visualizing data using t-SNE [J]. Journal of Machine Learning Research, 2008, 9(11): 2579-2605.
[37] Pan SJ, Tsang IW, Kwok JT, et al. Domain adaptation via transfer component analysis [J]. IEEE Transactions on Neural Networks, 2011, 22(2): 199-210.