Application of LMD-UNet Network in Multi-Modal MRI Image Segmentation of Brain Tumors
Xia Jingming1#, Tan Ling2*, Liang Ying2
1(School of Artificial Intelligence, Nanjing University of Information Science & Technology, Nanjing 210044, China)
2(Engineering Research Center of Digital Forensics, Ministry of Education, Nanjing University of Information Science & Technology, Nanjing 210044, China)
|
|
Abstract In the U-Net network there is a semantic gap between the feature maps of corresponding encoder and decoder stages, and its double convolution layers cannot learn multi-scale information, so part of the feature information is lost, which degrades MRI image segmentation. To solve this problem, this paper proposed a new image segmentation network, the local residual fusion multi-scale dual-branch network (LMD-UNet). In the encoding path, the network fused dense blocks with multi-scale convolution modules through local feature residuals to enlarge the receptive field and optimize the propagation of low-level visual features; in the decoding path, it used dual-branch convolution to generate new high-level semantic features and reconstruct the information lost along the encoding path. Segmentation experiments were conducted on 335 cases from the public brain tumor dataset BraTS, and the results were compared with U-Net, currently a mainstream segmentation network. The four objective evaluation indexes of the LMD-UNet model, precision, Dice, 95% HD, and recall, reached 0.933, 0.921, 0.702, and 0.966, respectively. Compared with U-Net, the corresponding indicators improved by 6.3%, 5.7%, 1.8%, and 6.1%, respectively, indicating that LMD-UNet achieved more precise segmentation of brain tumor images. Meanwhile, the proposed method also performed well in segmenting fine edge contours, which is expected to support the diagnosis and surgical treatment of brain tumors.
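The abstract gives no implementation details, so the following is a minimal PyTorch sketch of the two ideas it names: a multi-scale convolution module for the encoding path and a dual-branch convolution block with a local residual for the decoding path. The module names (MultiScaleConv, DualBranchBlock), kernel sizes, and channel counts are illustrative assumptions, not the authors' actual design.

# A minimal sketch of the building blocks described in the abstract.
# Kernel sizes, channel counts, and module names are assumptions.
import torch
import torch.nn as nn


class MultiScaleConv(nn.Module):
    """Parallel convolutions with different receptive fields, fused by a 1x1 conv."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Assumed branch design: 3x3, 5x5, and dilated 3x3 convolutions,
        # all padding-preserving so the branches can be concatenated.
        self.b3 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2)
        self.bd = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, kernel_size=1)
        self.act = nn.Sequential(nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate the three scales, then fuse channels back to out_ch.
        y = torch.cat([self.b3(x), self.b5(x), self.bd(x)], dim=1)
        return self.act(self.fuse(y))


class DualBranchBlock(nn.Module):
    """Decoder block: two parallel conv branches generate new high-level
    semantics, combined with a local residual to recover encoder losses."""

    def __init__(self, ch: int):
        super().__init__()
        self.branch1 = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True)
        )
        self.branch2 = nn.Sequential(
            nn.Conv2d(ch, ch, 1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True)
        )

    def forward(self, x):
        # Local residual fusion: input features plus both branch outputs.
        return x + self.branch1(x) + self.branch2(x)


if __name__ == "__main__":
    x = torch.randn(1, 4, 160, 160)  # 4 MRI modalities (e.g. T1, T1ce, T2, FLAIR)
    feat = MultiScaleConv(4, 32)(x)
    out = DualBranchBlock(32)(feat)
    print(out.shape)  # torch.Size([1, 32, 160, 160])

The three parallel branches cover 3x3, 5x5, and dilated 3x3 neighborhoods, one common way to enlarge the receptive field without extra downsampling; the residual addition in the decoder block mirrors the abstract's idea of reusing local features to reconstruct information lost along the encoding path.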
|
Received: 06 June 2022
|
|
Corresponding Author:
*E-mail: cillatan0@nuist.edu.cn
|
|
|
|
|
|
|
|