|
|
Vessel Segmentation in Coronary Angiography Images Based on Deep Convolution and Multi-Level Scale Feature Fusion |
Xu Yang1,2,3, Zhai Nannan1,2,3, Ni Weizhen1,2,3, Tan Qiang4, Wang Jinjia1,2,3* |
1(College of Information Science and Engineering, Yanshan University, Qinhuangdao 066000, Hebei, China)
2(Hubei Key Laboratory of Intelligent Robot (Wuhan Institute of Technology), Wuhan 430205, China)
3(Yangtze River Delta HIT Robot Technology Research Institute, Wuhu 241000, Anhui, China)
4(Department of Cardiology, The First Hospital of Qinhuangdao, Qinhuangdao 066002, Hebei, China)
|
|
Abstract Coronary angiography is an important diagnostic and therapeutic modality for coronary heart disease and other cardiovascular diseases, and accurate, fast vessel segmentation is critical to their diagnosis and treatment. Existing coronary angiography vessel segmentation algorithms suffer from several shortcomings, including weak segmentation of fine vessels, poor connectivity of the segmented vessels, and limited robustness to noise and artifacts. This study proposes an enhanced U-shaped segmentation network, termed HAM-UNet, which draws on the long-range dependency modeling of the Transformer structure and cross-layer skip connections, combining contextual hierarchical aggregation with multi-scale feature fusion. First, a series of image preprocessing steps is applied to enhance salient features of the original coronary angiography images and to augment the experimental data. The preprocessed images are then segmented by HAM-UNet. The encoder combines deep convolution with a residual structure, which efficiently captures global features and strengthens the network's perception of fine detail, improving segmentation accuracy and vessel connectivity. The decoder performs multi-scale feature fusion and up-sampling skip connections, improving the network's global perception and reducing the influence of irrelevant information. The datasets comprise 221 images from the General Hospital of Tianjin Medical University and 494 images from the First Hospital of Qinhuangdao. On the two datasets, HAM-UNet achieves accuracies of 0.983 and 0.998, IoU values of 0.857 and 0.908, and Dice scores of 0.842 and 0.883, respectively, indicating overall segmentation performance superior to that of U-Net, Att-UNet, and other algorithms.
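
As a reading aid only, the sketch below illustrates the kind of encoder building block and the evaluation metrics described in the abstract; it is not the authors' released HAM-UNet code. It assumes that "deep convolution" denotes a depthwise convolution followed by a pointwise projection, and the 7x7 kernel, channel counts, and PyTorch framework are illustrative choices.

import torch
import torch.nn as nn


class DWResidualBlock(nn.Module):
    """Illustrative encoder block: depthwise convolution, pointwise projection, residual path."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depthwise convolution: one filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=7, padding=3, groups=in_ch)
        # Pointwise 1x1 convolution mixes channels and sets the output width
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.norm = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)
        # 1x1 projection keeps the residual path shape-compatible when channels change
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1) if in_ch != out_ch else nn.Identity()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.norm(self.pointwise(self.depthwise(x)))
        return self.act(out + self.skip(x))


def iou_and_dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """Binary IoU and Dice for thresholded segmentation masks (metrics reported in the abstract)."""
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).float().sum()
    union = (pred | target).float().sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.float().sum() + target.float().sum() + eps)
    return iou.item(), dice.item()


if __name__ == "__main__":
    block = DWResidualBlock(64, 128)
    feat = block(torch.randn(1, 64, 128, 128))  # -> torch.Size([1, 128, 128, 128])
    print(feat.shape)
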
|
Received: 17 May 2024
|
|
Corresponding Author:
*E-mail: wjj@ysu.edu.cn
|
|
|
|
|
|
|
|