Segmentation of Organs at Risk in Thoracic CT Images Based on a Multi-Scale Feature-Aware Network
Deng Shijun1, Tang Hongzhong1,2*, Zeng Li1, Zeng Shuying1, Zhang Dongbo1
1(College of Automation and Electronic Information, Xiangtan University, Xiangtan 411104, Hunan, China) 2(Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education, Xiangtan University, Xiangtan 411105, Hunan, China)
|
|
Abstract Automatic segmentation of organs at risk (OARs) in medical images is an essential constituent of computer-aided diagnosis, and it plays a vital role in helping doctors complete radiotherapy with high quality and efficiency. Accurate segmentation of OARs in thoracic CT images faces several challenges, including low intensity contrast, interlaced and overlapping regions between different organs, and structures without clear boundaries. In this paper, a multi-scale feature-aware encoding-decoding network (FA-Unet) was proposed to segment OARs in thoracic CT images. To address the size differences among the four kinds of organs in the thoracic cavity, an input-aware module was designed to extract multi-scale features from the four organ types. To bridge the semantic gap between the encoding and decoding layers, a modified inception module was introduced into the long-range skip connections between the encoding and decoding parts of the architecture. Furthermore, the traditional serial convolution operations were replaced with efficient spatial pyramid (ESP) and pyramid spatial pooling (PSP) modules, which makes the network more lightweight and effectively avoids over-fitting caused by insufficient data. A novel loss function combining the Dice coefficient and cross entropy was formulated to train the network and resolve the class imbalance in thoracic CT images. Finally, the effectiveness of the model was evaluated on the SegTHOR dataset released at ISBI 2019, which includes 7390 thoracic CT images of 40 patients with lung cancer or Hodgkin's lymphoma. Experimental results showed that the Dice coefficients were 0.7932 for the esophagus, 0.9359 for the heart, 0.8549 for the trachea, and 0.8890 for the aorta; the Hausdorff distances were 1.4207 for the esophagus, 0.2124 for the heart, 0.6273 for the trachea, and 0.8870 for the aorta. The results verified that the proposed model outperformed other state-of-the-art methods on OAR segmentation and achieved very competitive performance on small target organs.
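The loss described above combines the Dice coefficient with cross entropy to counter class imbalance. The abstract does not give the exact formulation or weighting, so the following is only a minimal PyTorch sketch, assuming five output classes (background plus the four organs) and an equal weighting factor alpha; the function name dice_ce_loss and all parameter values are illustrative, not the authors' implementation.

import torch
import torch.nn.functional as F

def dice_ce_loss(logits, targets, num_classes=5, smooth=1e-5, alpha=0.5):
    # logits: (N, C, H, W) raw network outputs; targets: (N, H, W) integer labels.
    # Cross-entropy term penalizes per-pixel misclassification.
    ce = F.cross_entropy(logits, targets)
    # Soft Dice term measures per-class region overlap, which is less
    # sensitive to the dominance of background pixels.
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)  # sum over batch and spatial dimensions, keep classes
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice = ((2.0 * intersection + smooth) / (cardinality + smooth)).mean()
    # alpha balances the two terms; the paper's actual weighting is not
    # stated in the abstract, so 0.5 here is an assumption.
    return alpha * ce + (1.0 - alpha) * (1.0 - dice)

In this form the Dice term drives region overlap on small structures such as the esophagus and trachea, while the cross-entropy term keeps per-pixel gradients stable early in training.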
|
Received: 21 September 2020
|
|
|
|
|
|
|
|
|