Semantic Segmentation of Subcortical Brain Structures Based on DenseMedic Network
Yang Binbin, Liu Linwen, Zhang Weiwei*
(State Key Laboratory of Medical Molecular Biology, Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100005, China)

Abstract  Subcortical segmentation is the basis of computer-aided diagnosis and treatment of central nervous system diseases. By segmenting and analyzing brain structures in MRI images, diseases such as autism spectrum disorder, stroke, and brain tumors can be diagnosed and treated early. To address the problem of accurate subcortical segmentation, an algorithm named DenseMedic, built on the basic theory of deep learning, is proposed for subcortical segmentation on MRI images. First, the OreoDown framework increases the growth rate of the feature receptive field by enlarging the stride of convolutions in early layers, and restores the network depth in a sandwich-like manner with convolutions that keep the input and output sizes unchanged, so that the faster growth rate translates into an effectively larger receptive field. Second, DenseMedic instantiates the OreoDown framework with the idea of DenseNet: multi-scale context information is captured through densely connected feature-extraction operations. Finally, hybrid dilated convolution is used in each layer to further enlarge the receptive field and to avoid overly coarse feature extraction. Four metrics, namely the Dice similarity coefficient (DSC), intersection over union (IoU), 95% Hausdorff surface distance (HSD95), and average surface distance (ASD), were used to evaluate segmentation performance. In experiments on the public IBSR dataset (18 subjects), DenseMedic achieved 89.2%, 80.7%, 1.982, and 0.882 on the four metrics, respectively; in experiments on the public MRBrainS18 dataset (7 subjects), it achieved 88.7%, 79.8%, 1.249, and 0.570, respectively.
The experimental results show that the segmented subcortical structures overlap more with the corresponding ground truths and have more similar outlines, indicating that DenseMedic can effectively segment the major subcortical structures. In clinical applications, DenseMedic will help to accurately measure key indicators of central nervous system diseases and support rapid computer-aided diagnosis and treatment.
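The effect attributed to OreoDown, where larger strides in early layers accelerate receptive-field growth, can be illustrated with the standard receptive-field recurrence. This is a minimal sketch; the layer configurations below are hypothetical examples, not the actual DenseMedic architecture:

```python
def receptive_field(layers):
    """Receptive field of one output unit with respect to the input.

    layers: list of (kernel_size, stride) pairs, ordered input to output.
    Standard recurrence: each layer adds (k - 1) * jump to the receptive
    field, where jump is the product of the strides of all earlier layers.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Five 3x3 convolutions, all stride 1: the receptive field grows slowly.
plain = [(3, 1)] * 5
# Same depth, but stride 2 in the first two layers: much faster growth.
early_stride = [(3, 2), (3, 2)] + [(3, 1)] * 3
print(receptive_field(plain))         # 11
print(receptive_field(early_stride))  # 31
```

Because the stride of a layer multiplies the contribution of every later layer, moving larger strides to the front of the network yields the largest receptive field at the same depth.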
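The motivation for hybrid dilated convolution can also be made concrete: stacking layers with one repeated dilation rate leaves gaps ("gridding") in the set of input positions that influence an output, while a mixed schedule of rates covers the input densely. A minimal 1-D sketch; the rates below are illustrative, not necessarily those used in DenseMedic:

```python
def influence(rates, ksize=3):
    """Input offsets (1-D) that can reach the centre output after
    stacking dilated convolutions with the given dilation rates."""
    half = ksize // 2
    taps = {0}
    for r in rates:
        taps = {p + r * k for p in taps for k in range(-half, half + 1)}
    return sorted(taps)

# Repeating rate 2 three times samples only even offsets: gridding.
print(influence([2, 2, 2]))  # [-6, -4, -2, 0, 2, 4, 6]
# A hybrid schedule such as (1, 2, 5) covers every offset in its range.
print(influence([1, 2, 5]))  # [-8, -7, ..., 7, 8]
```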
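Of the four evaluation metrics, the two overlap measures (DSC and IoU) have simple closed forms on binary masks. A minimal sketch, not the evaluation code used in the experiments; the surface-distance metrics (HSD95, ASD) additionally require extracting boundary voxels and are omitted here:

```python
def dice_iou(pred, truth):
    """DSC and IoU between two binary masks given as 0/1 sequences.

    DSC = 2|A n B| / (|A| + |B|);  IoU = |A n B| / |A u B|.
    """
    a = {i for i, v in enumerate(pred) if v}
    b = {i for i, v in enumerate(truth) if v}
    inter = len(a & b)
    dsc = 2.0 * inter / (len(a) + len(b))
    iou = inter / len(a | b)
    return dsc, iou

# Two toy 1-D masks that agree on one of three foreground voxels:
d, i = dice_iou([1, 1, 0, 0], [1, 0, 1, 0])
print(round(d, 3), round(i, 3))  # 0.5 0.333
```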
Received: 16 April 2020