1(School of Biomedical Engineering, Health Science Center, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China) 2(Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, Shenzhen 518060, Guangdong, China)
Abstract: Placental maturity grading (PMG) is essential for assessing fetal growth and maternal health. In clinical practice, however, PMG has mostly relied on the clinician's subjective judgment, which is time-consuming and prone to inter-observer variability; the redundancy and repetitiveness of the process may also lead to erroneous estimates. Traditional machine learning methods capitalize on hand-crafted features, but such features may be essentially insufficient for PMG. To address this, we propose an automatic method that stages placental maturity via deep hybrid descriptors extracted from B-mode ultrasound (BUS) and color Doppler energy (CDE) images. Specifically, convolutional descriptors extracted from a deep convolutional neural network (CNN) are combined with hand-crafted features to form hybrid descriptors that boost the performance of the proposed method. First, different models with various feature layers are combined to obtain hybrid descriptors from the images; meanwhile, a transfer learning strategy is utilized to enhance grading performance based on the deep representation features. The extracted descriptors are then encoded by the Fisher vector (FV). Finally, a support vector machine (SVM) is used as the classifier to grade placental maturity. The models were evaluated on placental data labeled by doctors. The model with hybrid descriptors based on the 19-layer network achieved an accuracy of 94.15%, which is 3.01% higher than the model with hand-crafted features alone and 7.35% higher than the model with CNN features alone. The experimental results demonstrate that the proposed method can be applied effectively to automatic PMG.
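The pipeline described above (local hybrid descriptors per image → Fisher vector encoding → SVM grading) can be illustrated with a minimal sketch. This is not the authors' implementation: the toy data, descriptor dimensions, and the first-order-only FV (gradients with respect to the GMM means, with power and L2 normalization) are illustrative assumptions, using scikit-learn in place of the original feature extraction and training code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def fisher_vector(descriptors, gmm):
    """First-order Fisher vector: gradients w.r.t. GMM means only (a common
    simplification; full FV also includes weight and variance gradients)."""
    q = gmm.predict_proba(descriptors)                       # (N, K) soft assignments
    diff = descriptors[:, None, :] - gmm.means_[None, :, :]  # (N, K, D)
    fv = (q[:, :, None] * diff / np.sqrt(gmm.covariances_)[None]).sum(axis=0)
    fv /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
    fv = fv.ravel()                                          # length K * D
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                   # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)                 # L2 normalization

# Toy stand-in for hybrid (CNN + hand-crafted) descriptors: 40 "images",
# each with 50 local descriptors of dimension 8, two placental grades.
rng = np.random.default_rng(0)
X, y = [], []
for grade in (0, 1):
    for _ in range(20):
        X.append(rng.normal(loc=grade, scale=1.0, size=(50, 8)))
        y.append(grade)

# Fit a diagonal-covariance GMM on all local descriptors, encode each image
# as one Fisher vector, then grade with a linear SVM.
gmm = GaussianMixture(n_components=4, covariance_type="diag",
                      random_state=0).fit(np.vstack(X))
fvs = np.array([fisher_vector(d, gmm) for d in X])
clf = SVC(kernel="linear").fit(fvs, y)
acc = clf.score(fvs, y)
```

In practice the local descriptors would come from a pretrained (transfer-learned) CNN layer concatenated with hand-crafted features, and accuracy would be reported on held-out labeled data rather than the training set as in this sketch.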