EEG Emotion Recognition Based on Source-Free Domain Adaptation
Zhao Hongyu1, Li Chang1*, Liu Yu2#, Cheng Juan2#, Song Rencheng2, Chen Xun3#
1(School of Instrument Science and Opto-electronics Engineering, Hefei University of Technology, Hefei 230009, China) 2(Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China) 3(Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China)
Abstract: Existing domain adaptation methods for EEG emotion recognition use source domain data and feature distributions to train the model, which inevitably requires frequent access to the source domain and may therefore leak private information of the source domain subjects. To address this problem, this paper proposed a source-free domain adaptation EEG emotion recognition method based on the Gaussian mixture model, nuclear-norm maximization, and Tsallis entropy (GNTSFDA). First, the source domain model was trained on the source domain data with the cross-entropy loss, using a CNN and transformer feature mixture (CTFM) network. Then, pseudo-labels of the target domain data were generated by clustering with a Gaussian mixture model and used to construct a classification loss. Finally, based on the pseudo-labels and the classification loss, the source domain model was re-trained on the target domain data to update its parameters and obtain the target domain model; during this training, a nuclear-norm maximization loss was also employed to enhance the class discriminability and diversity of the model predictions, and a Tsallis entropy loss was employed to reduce the uncertainty of the model predictions. The GNTSFDA method was evaluated on the SEED (14 source domain subjects, 1 target domain subject), SEED-IV (14 source domain subjects, 1 target domain subject), and DEAP (31 source domain subjects, 1 target domain subject) public datasets under a leave-one-subject-out cross-validation paradigm. The results showed that on the three datasets the emotion recognition accuracies of the target domain model were 80.20%, 61.20%, and 58.89%, respectively, improvements of 8.98%, 7.72%, and 6.54% over those obtained with the source domain model.
The GNTSFDA method only needs access to the parameters of the source domain model rather than the source domain data itself; it therefore effectively protects the private information of the source domain subjects and is of great significance for the practical application of EEG-based emotion recognition.
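The three target-domain training components described in the abstract (GMM pseudo-labeling, nuclear-norm maximization, and Tsallis entropy minimization) can be sketched as follows. This is a minimal illustrative sketch in NumPy/scikit-learn, not the authors' implementation; the function names, the entropic index q, and the use of GaussianMixture are assumptions for illustration. `probs` denotes the batch of softmax prediction vectors (one row per sample, one column per emotion class).

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def gmm_pseudo_labels(features, n_classes, seed=0):
    """Cluster target-domain features with a Gaussian mixture model and
    return the cluster assignments as pseudo-labels for the classification loss."""
    gmm = GaussianMixture(n_components=n_classes, random_state=seed)
    return gmm.fit_predict(features)


def nuclear_norm_loss(probs):
    """Negative nuclear norm (sum of singular values) of the batch prediction
    matrix. Minimizing this loss maximizes the nuclear norm, which encourages
    both class discriminability and diversity of the predictions."""
    return -np.linalg.svd(probs, compute_uv=False).sum()


def tsallis_entropy_loss(probs, q=2.0, eps=1e-12):
    """Mean Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1) over the batch.
    Minimizing it sharpens the predictions, reducing their uncertainty."""
    s = (1.0 - np.power(probs + eps, q).sum(axis=1)) / (q - 1.0)
    return s.mean()
```

Confident (near one-hot) prediction batches yield both a lower Tsallis entropy and a larger nuclear norm than collapsed or uniform batches, which is why minimizing the two losses jointly pushes the target domain model toward certain yet diverse predictions.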