Abstract: Observing molecules that play important roles in life activities is a key way to uncover the intrinsic mechanisms of those activities. Most existing biomedical image processing methods focus on detecting and identifying specific substances, which makes it difficult for them to adapt to the changing demands of scientific research. To this end, this paper proposed a human-computer interaction method based on a U-Net convolutional neural network to identify all instances of the same kind of molecule in biomedical images, such as cell nuclei and proteins. First, the U-Net convolutional network was used to convert molecular images into deep feature maps, and the features of the user-selected target molecules were then matched against the entire feature map to detect all similar molecules of interest. Next, the CSR-DCF (discriminative correlation filter with channel and spatial reliability) algorithm was used to build a multi-target tracker for continuous tracking of the target molecules. Experimental results showed that the proposed method could quickly detect similar molecules of interest through simple human-computer interaction and obtain important information on the number, distribution, and interactions of the target molecules. The attention-based U-Net and the standard U-Net performed consistently on 200 static test images randomly selected from the Nucleus, Human Protein Atlas, Bacteria, and Blood Red Cell datasets, with mean average precision values of 0.9125 and 0.8981, respectively. At the same time, tracking of targets in dynamic images of mouse stem cells was accurate and stable, demonstrating that the method can meet the needs of observing microscopic life processes in life science research.
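As a rough illustration of the two-stage pipeline summarized above, the sketch below shows (a) cosine-similarity matching of a user-selected target's features over an entire U-Net feature map and (b) per-target CSR-DCF tracking via OpenCV's CSRT tracker. The feature extractor, the similarity threshold of 0.8, and the helper names are assumptions for illustration only, not the authors' implementation; any trained U-Net-style encoder that outputs a C x H x W feature map could be substituted.

```python
# Minimal sketch of the pipeline described in the abstract, under the following
# assumptions (not the authors' code): a trained U-Net-style encoder supplies the
# feature maps, the similarity threshold 0.8 is illustrative, and OpenCV's CSRT
# tracker (opencv-contrib-python) stands in for the CSR-DCF tracker.
import cv2
import torch
import torch.nn.functional as F


def match_targets(feature_map, template_feat, thresh=0.8):
    """Slide a user-selected target's feature patch over the whole feature map
    and return the (row, col) positions whose cosine similarity exceeds thresh.

    feature_map:   (1, C, H, W) tensor produced by the U-Net encoder
    template_feat: (1, C, h, w) crop of the same feature map around the target
    """
    t = template_feat / (template_feat.norm() + 1e-8)      # unit-norm template
    corr = F.conv2d(feature_map, t)                        # (1, 1, H-h+1, W-w+1)
    ones = torch.ones_like(template_feat)
    win_norm = F.conv2d(feature_map ** 2, ones).clamp_min(1e-8).sqrt()
    sim = corr / win_norm                                  # cosine similarity map
    ys, xs = torch.nonzero(sim[0, 0] > thresh, as_tuple=True)
    return list(zip(ys.tolist(), xs.tolist()))


def build_trackers(frame_bgr, boxes):
    """Create one CSR-DCF (CSRT) tracker per detected target box (x, y, w, h)."""
    trackers = []
    for box in boxes:
        trk = cv2.TrackerCSRT_create()       # requires opencv-contrib-python
        trk.init(frame_bgr, tuple(int(v) for v in box))
        trackers.append(trk)
    return trackers


def track_frame(trackers, frame_bgr):
    """Advance every tracker by one frame; return each updated box, or None if lost."""
    results = []
    for trk in trackers:
        ok, box = trk.update(frame_bgr)
        results.append(tuple(int(v) for v in box) if ok else None)
    return results
```

Detections returned in feature-map coordinates would still need to be scaled back to image coordinates by the encoder's downsampling stride, and overlapping hits merged (e.g., by non-maximum suppression), before being handed to `build_trackers`.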