Chinese Journal of Biomedical Engineering  2022, Vol. 41, Issue (5): 513-526    DOI: 10.3969/j.issn.0258-8021.2022.05.001
Original Article
Glioma Segmentation Based on Feature Selection of Multi-Modal MR Images
Cheng Juan1#, Zhang Chuya1, Liu Yu1#*, Li Chang1, Zhu Zhiqin2, Chen Xun3#
1(Department of Biomedical Engineering, Hefei University of Technology, Hefei 230009, China)
2(College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China)
3(Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230026, China)
Abstract: Glioma segmentation usually requires subdividing the tumor into several sub-regions of different pathological nature, which in turn requires multiple modalities of magnetic resonance (MR) images. In recent years, deep-learning-based glioma segmentation has become the mainstream approach. However, most deep-learning-based methods simply stack the different MR modalities (or their low-level features) along the channel dimension before feeding them into a segmentation network, and ignore, during feature extraction, the fact that different sub-regions rely on different modality-specific features, which limits segmentation performance. This work proposed a two-stage segmentation framework based on feature selection of multi-modal MR images for glioma segmentation. On the one hand, a multi-modal feature selection module was designed and embedded into the segmentation network to automatically extract and effectively select the multi-modal MR image features required by the current segmentation task. On the other hand, the segmentation of the pathologically distinct sub-regions was split into two stages, with the result of the first stage providing localization information for the segmentation targets of the second stage. The proposed method and the comparison methods were evaluated on the public BraTS2018 (285 training and 66 validation patients), BraTS2019 (335 training and 125 validation patients), and BraTS2020 (369 training and 125 validation patients) datasets. On BraTS2018, the proposed method achieved Dice similarity coefficients of 0.898, 0.854, and 0.818 and Hausdorff distances of 4.072, 6.179, and 3.763 for the whole tumor, tumor core, and enhancing tumor regions, respectively. On BraTS2019, the Dice coefficients for the same three regions were 0.892, 0.839, and 0.800, with Hausdorff distances of 6.168, 7.077, and 3.807. On BraTS2020, the Dice coefficients were 0.896, 0.837, and 0.803, with Hausdorff distances of 6.223, 7.033, and 4.411. The comparison results showed that the proposed method had a clear advantage in segmenting the enhancing tumor and tumor core regions, and in particular achieved the best enhancing-tumor segmentation performance on BraTS2020. The two-stage framework built on the multi-modal feature selection module automatically and sufficiently learned the MR modality features required for each stage's segmentation target, produced satisfactory segmentation results, and offers a possible solution for computer-aided tumor diagnosis.
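The abstract does not describe the internal structure of the multi-modal feature selection module, so the following is only a minimal sketch of one plausible form of such a module, assuming a squeeze-and-excitation-style channel gate over per-modality 3D features. The class name ModalityFeatureSelection, the assumed four input modalities (T1, T1ce, T2, FLAIR), and all hyper-parameters are hypothetical and are not taken from the paper.

```python
# Minimal sketch (not the authors' code): a channel-attention-style module that
# extracts features from each MR modality and learns weights to select/fuse them.
# All class names and hyper-parameters here are hypothetical.
import torch
import torch.nn as nn


class ModalityFeatureSelection(nn.Module):
    """Extract per-modality 3D features and reweight them before fusion."""

    def __init__(self, num_modalities=4, feat_channels=16):
        super().__init__()
        # One lightweight 3D encoder per modality (T1, T1ce, T2, FLAIR assumed).
        self.encoders = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(1, feat_channels, kernel_size=3, padding=1),
                nn.InstanceNorm3d(feat_channels),
                nn.ReLU(inplace=True),
            )
            for _ in range(num_modalities)
        )
        # Squeeze-and-excitation style gate over the concatenated channels.
        total = num_modalities * feat_channels
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(total, total // 4),
            nn.ReLU(inplace=True),
            nn.Linear(total // 4, total),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, num_modalities, D, H, W); one channel per MR modality.
        feats = [enc(x[:, i : i + 1]) for i, enc in enumerate(self.encoders)]
        fused = torch.cat(feats, dim=1)                 # (B, total, D, H, W)
        weights = self.gate(fused).view(*fused.shape[:2], 1, 1, 1)
        return fused * weights                          # selected/reweighted features


# Example: a four-modality patch of size 64^3.
selected = ModalityFeatureSelection()(torch.randn(1, 4, 64, 64, 64))
print(selected.shape)  # torch.Size([1, 64, 64, 64, 64])
```

The gate learns one scalar weight per feature channel, so channels (and hence modalities) that are less useful for the current segmentation target can be down-weighted before fusion; the paper's actual module may differ in structure.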
Abstract: Glioma segmentation based on multi-modal MR images plays a positive role in the diagnosis and treatment of tumors. Different modalities of MR images are known to provide different kinds of information about pathological tissues. Currently, an increasing number of deep-learning-based methods have been proposed to segment brain gliomas using multi-modal MR images. However, these methods usually stack the original features derived from the multi-modal MR images along the channel dimension and simply take the stacked features as inputs, leading to inadequate feature mining and unsatisfactory segmentation performance. To solve this problem, this paper proposed to segment three glioma regions with a two-stage segmentation scheme, in which each stage has a feature selection module and a segmentation network. The first stage aimed to segment the peripheral edema region, while the second stage segmented the necrosis/non-enhancing tumor and enhancing tumor regions. In addition, the first-stage segmentation result provided essential location information that benefited the segmentation of the other two tumor regions in the second stage. For each stage, a multi-modal feature selection module was designed to automatically extract effective, cross-modal-fused features from each modality of MR images, and these features were then fed into the subsequent segmentation network, which was composed of a V-Net and a variational autoencoder (VAE). Experiments were conducted on three public brain tumor datasets: BraTS2018, BraTS2019, and BraTS2020. On BraTS2018, the average Dice scores of the proposed method for segmenting the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions reached 0.898, 0.854, and 0.818, respectively, while the corresponding Hausdorff95 distances were 4.072, 6.179, and 3.763. On BraTS2019, the average Dice scores for the same three regions reached 0.892, 0.839, and 0.800, with Hausdorff95 distances of 6.168, 7.077, and 3.807. On BraTS2020, the average Dice scores reached 0.896, 0.837, and 0.803, with Hausdorff95 distances of 6.223, 7.033, and 4.411. The comparison experiments demonstrated the clearly superior performance of the proposed method in segmenting the ET and TC regions; in particular, its ET segmentation performance was the best on BraTS2020. Owing to the proposed two-stage scheme, in which each stage has a feature selection module followed by a segmentation network, cross-modality-fused features could be automatically extracted from each modality of MR images, and the performance of segmenting the three tumor regions was thus significantly improved.
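The abstract states that the first-stage result supplies location information for the second stage but does not specify how it is passed. The sketch below assumes, purely for illustration, that the stage-1 whole-tumor probability map is concatenated to the stage-2 input and also used to mask the stage-2 output; two_stage_segmentation, stage1_net, and stage2_net are hypothetical placeholders standing in for the paper's feature-selection + V-Net (with VAE branch) networks.

```python
# Minimal sketch (assumed wiring, not the authors' implementation): stage 1
# predicts the whole-tumor extent; its probability map is appended as an extra
# input channel so stage 2 can localize the tumor core / enhancing tumor.
import torch
import torch.nn as nn


def two_stage_segmentation(volumes: torch.Tensor,
                           stage1_net: nn.Module,
                           stage2_net: nn.Module) -> torch.Tensor:
    """volumes: (B, 4, D, H, W) stacked multi-modal MR images."""
    # Stage 1: coarse whole-tumor (peripheral edema) probability map.
    wt_prob = torch.sigmoid(stage1_net(volumes))          # (B, 1, D, H, W)

    # Stage 2: reuse the four modalities plus the stage-1 map as a localization prior.
    stage2_in = torch.cat([volumes, wt_prob], dim=1)       # (B, 5, D, H, W)
    sub_logits = stage2_net(stage2_in)                     # (B, 2, D, H, W): NCR/NET, ET

    # Keep sub-region predictions only inside the stage-1 whole-tumor region.
    return torch.sigmoid(sub_logits) * (wt_prob > 0.5).float()


# Quick smoke test with dummy 1-layer "networks" in place of the real models.
if __name__ == "__main__":
    s1 = nn.Conv3d(4, 1, kernel_size=1)
    s2 = nn.Conv3d(5, 2, kernel_size=1)
    out = two_stage_segmentation(torch.randn(1, 4, 32, 32, 32), s1, s2)
    print(out.shape)  # torch.Size([1, 2, 32, 32, 32])
```

Other ways of passing the localization information (e.g., cropping stage-2 inputs to the stage-1 bounding box) would fit the same description; the paper does not disclose which variant it uses.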
Key words: glioma segmentation; multi-modal magnetic resonance images; feature selection; convolutional neural networks; V-Net
Received: 2022-01-29
CLC number: R318
Funding: National Natural Science Foundation of China (62176081, 62171176); Fundamental Research Funds for the Central Universities (JZ2021HGPA0061)
Corresponding author: * E-mail: yuliu@hfut.edu.cn
Author note: # Member of the Chinese Society of Biomedical Engineering
Cite this article:
Cheng Juan, Zhang Chuya, Liu Yu, Li Chang, Zhu Zhiqin, Chen Xun. Glioma Segmentation Based on Feature Selection of Multi-Modal MR Images. Chinese Journal of Biomedical Engineering, 2022, 41(5): 513-526.
Link to this article:
http://cjbme.csbme.org/CN/10.3969/j.issn.0258-8021.2022.05.001     or     http://cjbme.csbme.org/CN/Y2022/V41/I5/513