Research Progress of Medical Image Recognition Based on Deep Learning
Liu Fei, Zhang Junran*, Yang Hao
School of Electrical Engineering and Information, Sichuan University, Chengdu 610065, China
Abstract: In recent years, with the rapid development of medical imaging technology, medical image analysis has entered the era of big data, and extracting useful information from massive medical image data has become a great challenge for medical image recognition. Deep learning is a new field of machine learning. Conventional machine learning methods cannot effectively extract the rich information contained in medical images, whereas deep learning, by simulating the human brain, can build hierarchical models with powerful automatic feature extraction, complex model construction, and efficient feature representation. More importantly, deep learning extracts features layer by layer, from the bottom level to the top, directly from pixel-level raw data, which offers a new way to address the problems now facing medical image recognition. Drawing on a large body of domestic and foreign literature, this paper describes three deep learning methods, enumerates three common implementation models of deep learning, and introduces the deep learning training process. We summarize the applications of deep learning in disease detection and classification and in lesion recognition, and we summarize two common problems in applying deep learning to medical image recognition. The paper concludes with an analysis of, and outlook on, deep learning for medical image recognition.
Received: 02 December 2016
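The abstract's central point, that deep learning learns features layer by layer from raw pixel data rather than relying on hand-crafted descriptors, can be made concrete with a small sketch. The network below is purely illustrative and is not taken from the paper: the PyTorch framework, the 64x64 single-channel input, the layer widths, and the two-class output are all assumptions made for the example.

```python
# A minimal sketch of hierarchical feature learning from pixel-level data.
# All sizes here are illustrative assumptions, not settings from the paper.
import torch
import torch.nn as nn

class TinyMedicalCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # Bottom layers: low-level features (edges, textures) from raw pixels.
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Middle layers: combinations of low-level features (local patterns).
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Top layers: more abstract representations used for the final decision.
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)              # automatically learned hierarchical features
        return self.classifier(h.flatten(1))

if __name__ == "__main__":
    # A fake batch of four 64x64 single-channel "medical images", pixels in [0, 1].
    images = torch.rand(4, 1, 64, 64)
    logits = TinyMedicalCNN(num_classes=2)(images)
    print(logits.shape)  # torch.Size([4, 2])
```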
|
|
|
|