A Colorectal Segmentation Method Based on U-Net Improved with Identical Design

Shen Zhiqiang, Lin Chaonan, Pan Lin, Nie Weiyu, Pei Yue, Huang Liqin, Zheng Shaohua*

(College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, Fujian, China)
Abstract  Colonoscopy is a widely used technique for colon screening and the diagnosis of polyp lesions. Nevertheless, manual screening with colonoscopy misses around 25% of polyps. Deep learning-based computer-aided diagnosis (CAD) for polyp detection has the potential to reduce such human error. Polyp detection commonly relies on the encoder-decoder network (U-Net) for polyp segmentation. However, U-Net has two limitations: first, a semantic gap exists between the feature maps of the encoder and the decoder; second, the convolutional layers in the encoder-decoder processing units fail to extract multi-scale information. In this work, we proposed an identical network (I-Net) to tackle both problems in a unified manner. I-Net introduces identical units (IU) into both the skip connections and the encoder-decoder sub-networks of U-Net to reduce the semantic gap. Meanwhile, motivated by dense and residual connections, we designed a dense residual unit (DRU) to learn multi-scale information. Finally, DRI-Net was developed by instantiating the IU as a DRU, which not only alleviates the semantic gap between the encoder and the decoder but also learns multi-scale features. We evaluated the proposed methods on the CVC-ClinicDB dataset, which contains 612 colonoscopy images, using five-fold cross-validation. Experimental results demonstrated that DRI-Net achieved a Dice coefficient of 90.06% and an intersection over union (IoU) of 85.52%. Compared with U-Net, DRI-Net improved the Dice coefficient by 8.50% and the IoU by 11.03%. In addition, we studied the generalization of the proposed methods on the International Skin Imaging Collaboration (ISIC) 2017 dataset, which includes a training set of 2 000 dermoscopy images for model training and a test set of 600 images for model evaluation. The study indicated that I-Net achieved a Dice coefficient of 86.57% and an IoU of 79.20%. Compared with the first-place solution on the ISIC 2017 leaderboard, DRI-Net improved the Dice coefficient by 1.67% and the IoU by 2.70%. In conclusion, the results demonstrated that DRI-Net effectively overcomes the limitations of U-Net, improves segmentation accuracy in the polyp segmentation task, and shows strong generalization capability on data from other modalities.
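The connectivity pattern behind the dense residual unit described above can be sketched as follows. This is a minimal NumPy illustration only: the real DRU uses 3x3 convolutions inside a trained network, and the layer count, channel widths, and per-pixel channel-mixing stand-in for convolution used here are assumptions, since the abstract does not specify them.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def mix_channels(x, w):
    # Stand-in for a convolutional layer: per-pixel channel mixing
    # (w: [out_ch, in_ch], x: [in_ch, H, W]) followed by ReLU.
    return relu(np.einsum('oc,chw->ohw', w, x))

def dense_residual_unit(x, layer_weights, fuse_weights):
    """Sketch of a dense residual unit (DRU): every layer receives the
    concatenation of all earlier feature maps (dense connectivity), and
    the fused output is added back to the input (residual connection)."""
    feats = [x]
    for w in layer_weights:
        inp = np.concatenate(feats, axis=0)   # dense: reuse all previous maps
        feats.append(mix_channels(inp, w))
    fused = mix_channels(np.concatenate(feats, axis=0), fuse_weights)
    return fused + x                          # residual: identity shortcut
```

The shapes must chain: with input channels C and growth rate g, layer i mixes C + i*g channels down to g, and the final fusion maps all accumulated channels back to C so the residual addition is well defined.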
Received: 13 April 2021
Corresponding author: * E-mail: sunphen@fzu.edu.cn
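The two evaluation metrics reported in the abstract, the Dice coefficient and intersection over union (IoU), have standard definitions on binary masks and can be computed as below. The epsilon smoothing term is a common implementation convention, not something stated in the abstract.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU (Jaccard index) = |A∩B| / |A∪B| for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

For example, a prediction overlapping the ground truth on one of three foreground pixels total gives Dice = 0.5 and IoU = 1/3; a perfect prediction gives 1.0 for both.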
													
														