Real-Time Target Detection of Abnormal Regions in Gastrointestinal Endoscopy Based on GE-YOLO |
Fan Shanhui1, Lai Jintao1, Wei Shangguang1, Wei Kaihua1, Fan Yihong2, Lv Bin2, Li Lihua1* |
1(School of Automation (School of Artificial Intelligence), Hangzhou Dianzi University, Hangzhou 310018, China)
2(Department of Gastroenterology, Zhejiang Provincial Hospital of Traditional Chinese Medicine, Hangzhou 310006, China)
|
|
Abstract Gastrointestinal endoscopy is a common clinical examination for the early diagnosis and monitoring of gastrointestinal diseases. However, the examination must be performed by a trained physician who identifies lesions in real time, so its outcome depends heavily on the physician's experience; this subjectivity easily leads to missed and/or false detections. In this study, GE-YOLO, a real-time detection method for abnormal regions in gastrointestinal endoscopy based on an improved YOLOv7-tiny, was proposed. Using YOLOv7-tiny as the basic framework, the backbone feature extraction network was constructed from two different feature extraction modules (the C3 module and the P-ELAN module) to improve the feature extraction capability of the network; coordinate convolution (CoordConv) then replaced the ordinary convolution in the up-sampling path, enabling the model to localize lesions more accurately; furthermore, partial convolution (PConv) replaced the 3×3 convolution in the feature extraction module, which preserved detection performance while greatly reducing the computation cost and number of parameters and improving the detection speed; finally, a joint loss function based on IoU and the normalized Wasserstein distance made the model more sensitive to small lesions. The model was trained and tested on the labeled images (4 172 in total) of the Kvasir-Capsule dataset. The average precision, recall and F1-score of GE-YOLO were 94.2%, 97.2% and 0.957, respectively, and the detection speed was 60 frames per second, an improvement of 2.8% in precision, 12.0% in recall and 0.075 in F1-score over YOLOv7-tiny. These promising results demonstrate that the proposed method can achieve high-precision real-time diagnosis of digestive tract lesions and is expected to be deployed in clinical endoscopy equipment to assist doctors in real time during examinations and improve diagnostic efficiency, which has substantial clinical value and research significance.
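To make the small-lesion term concrete, the following minimal PyTorch-style sketch shows how an IoU/normalized-Wasserstein-distance joint regression loss can be assembled. It is an illustration under stated assumptions, not the authors' released code: the normalizing constant c, the mixing weight alpha, and the names nwd and joint_box_loss are hypothetical.

    import torch

    def nwd(pred, target, c=12.8, eps=1e-7):
        # Boxes are (cx, cy, w, h). Each box is modelled as a 2-D Gaussian
        # N([cx, cy], diag(w^2/4, h^2/4)); for such Gaussians the squared
        # 2-Wasserstein distance has the closed form below, which is mapped
        # to a (0, 1] similarity by exp(-sqrt(d2)/c) (normalized Wasserstein
        # distance, Wang et al., 2021).
        d2 = ((pred[..., 0] - target[..., 0]) ** 2            # centre x
              + (pred[..., 1] - target[..., 1]) ** 2          # centre y
              + ((pred[..., 2] - target[..., 2]) / 2) ** 2    # half width
              + ((pred[..., 3] - target[..., 3]) / 2) ** 2)   # half height
        return torch.exp(-torch.sqrt(d2 + eps) / c)

    def joint_box_loss(pred, target, iou, alpha=0.5):
        # Hypothetical blend of a plain IoU loss with an NWD term that stays
        # informative when boxes are tiny or barely overlap.
        return (1 - alpha) * (1 - iou) + alpha * (1 - nwd(pred, target))

Because the NWD similarity decays smoothly with centre distance even when two boxes do not overlap, its gradient does not vanish for very small lesions, which is the behaviour a joint IoU/NWD loss exploits.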
|
Received: 07 September 2023
|
|
Corresponding Author:
*E-mail: lilh@hdu.edu.cn
|
|
|
|
|
|
|
|