Fetal Facial Standard Plane Recognition via Deep Convolutional Neural Networks
Yu Zhen1, Wu Lingyun1, Ni Dong1, Chen Siping1, Li Shengli2, Wang Tianfu1*, Lei Baiying1*
1(School of Biomedical Engineering, Shenzhen University, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China) 2(Department of Ultrasound, Affiliated Shenzhen Maternal and Child Healthcare Hospital of Nanfang Medical University, Shenzhen 518060, Guangdong, China)
|
|
Abstract The accurate recognition of the fetal facial standard plane (FFSP) (i.e., the axial, coronal, and sagittal planes) from ultrasound (US) images is essential in routine US examination. Since manual measurement is labor-intensive, subjective, and time-consuming, an automatic FFSP recognition method is highly desirable. In this paper, we propose to recognize FFSP using convolutional neural network (CNN) architectures of different depths (8-layer and 16-layer). Specifically, we trained these models with two main strategies: 1) training the CNN from scratch with random initialization; and 2) transfer learning, i.e., fine-tuning an ImageNet pre-trained CNN on our FFSP dataset. In our experiments, the fetal gestational ages typically ranged from 20 to 36 weeks. The training dataset contained 4849 images (375 axial, 257 coronal, 405 sagittal, and 3812 non-FFSP images), and the testing dataset contained 2418 images (491 axial, 127 coronal, 174 sagittal, and 1626 non-FFSP images). The experiments indicated that transfer learning improved the recognition accuracy by 9.29% over training from scratch, and increasing the CNN depth from 8 to 16 layers improved the accuracy by a further 3.17%. The best recognition accuracy of our CNN model was 94.5%, which was 3.66% higher than that of our previous study. The effectiveness of deep CNNs and transfer learning for FFSP recognition shows promising potential for clinical diagnosis.
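To illustrate the second training strategy described above, the following is a minimal sketch in PyTorch/torchvision (an assumption; the framework, dataset path, preprocessing, and hyperparameters shown here are illustrative and not taken from the paper) of fine-tuning an ImageNet pre-trained 16-layer network (VGG16) for the four FFSP classes (axial, coronal, sagittal, non-FFSP).

```python
# Hedged sketch of transfer learning for FFSP recognition: fine-tune an
# ImageNet pre-trained VGG16 on a 4-class ultrasound dataset.
# The path "ffsp/train", batch size, and learning rate are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, datasets, transforms

NUM_CLASSES = 4  # axial, coronal, sagittal, non-FFSP

# Load VGG16 with ImageNet weights and replace the final classifier layer
# so the network outputs 4 class scores instead of 1000.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

# Ultrasound frames are effectively grayscale; replicate them to 3 channels
# and resize to 224x224 to match the pre-trained input format.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("ffsp/train", transform=preprocess)  # hypothetical path
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# A small learning rate gently adapts the transferred ImageNet features
# to the ultrasound domain instead of overwriting them.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Training from scratch (the first strategy) would differ only in constructing the network with random weights (e.g., passing `weights=None`) and typically using a larger learning rate.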
|
Received: 08 June 2016
|
|
|
|
|
|
|
|