Classification of Breast Tumors Based on Dual-Branch Multi-View Transformer
Liu Yiyao1,2, Yang Yi3, Chen Minsi1,2, Wang Tianfu1,2, Jiang Wei3*, Lei Baiying1,2*
1(Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China)
2(Department of Biomedical Engineering, School of Medicine, Shenzhen University, Shenzhen 518060, Guangdong, China)
3(Department of Ultrasonics, Huazhong University of Science and Technology Union Shenzhen Hospital, Shenzhen 518052, Guangdong, China)
Abstract: The automated breast volume scanner (ABVS) is the primary screening method for breast cancer owing to its high efficiency and absence of radiation. Computer-aided classification of breast cancer based on ABVS images helps clinicians diagnose breast cancer accurately and quickly, and can even help improve the diagnostic skills of junior doctors. Owing to its imaging mode, the ABVS system produces large amounts of three-dimensional breast image data, so conventional deep learning on such data requires long training times and substantial computational resources. We therefore designed a multi-view image extraction method for ABVS data that replaces the raw 3D volume as network input, compensating for the spatial correlation lost in 2D deep learning while reducing the number of parameters. Second, based on the spatial positional relationship among cross-view images, we proposed a self-attention encoder (Transformer) to obtain effective feature representations of the images. Our experiments used 153 volumes from our in-house ABVS database. The accuracy of benign/malignant classification was 86.88%, the F1-score was 81.70%, and the AUC reached 0.8316. The experimental results indicate that the proposed method can be effectively applied to benign/malignant screening of breast tumors based on ABVS images.
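To make the multi-view extraction idea in the abstract concrete, here is a minimal Python sketch. The slicing planes (one center slice per orthogonal axis), the view count, and the 224 x 224 target size are illustrative assumptions, not details taken from the paper; only the general strategy of replacing the 3D volume with resampled 2D views comes from the abstract.

    import numpy as np
    from scipy.ndimage import zoom  # order=3 gives cubic spline resampling

    def extract_multi_view_slices(volume, out_size=(224, 224)):
        """Extract 2D views along the three orthogonal axes of a 3D volume.

        volume   : np.ndarray of shape (D, H, W), one ABVS scan
        out_size : target (height, width) for every 2D view
        Returns a list of three 2D arrays (axial, coronal, sagittal views).
        NOTE: taking one center slice per axis is an illustrative
        simplification; the paper's actual view-selection scheme may differ.
        """
        d, h, w = volume.shape
        views = [volume[d // 2, :, :],   # axial center slice
                 volume[:, h // 2, :],   # coronal center slice
                 volume[:, :, w // 2]]   # sagittal center slice
        resized = []
        for v in views:
            factors = (out_size[0] / v.shape[0], out_size[1] / v.shape[1])
            resized.append(zoom(v, factors, order=3))  # cubic spline interpolation
        return resized

    # Example: a synthetic 200x300x400 volume yields three 224x224 views.
    demo_views = extract_multi_view_slices(np.random.rand(200, 300, 400))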
Liu Yiyao, Yang Yi, Chen Minsi, Wang Tianfu, Jiang Wei*, Lei Baiying*. Classification of Breast Tumors Based on Dual-Branch Multi-View Transformer. Chinese Journal of Biomedical Engineering, 2022, 41(5): 527-536.
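Similarly, the dual-branch multi-view Transformer described in the abstract can be pictured with a short PyTorch sketch. The branch layout, layer sizes, CNN stem, and learnable per-view position embedding below are all assumptions made for illustration; only the overall idea, encoding multiple views with a self-attention encoder whose position embeddings stand in for the spatial relationship between cross-view images, comes from the abstract.

    import torch
    import torch.nn as nn

    class DualBranchMultiViewTransformer(nn.Module):
        """Illustrative sketch, not the authors' released model.

        Each branch embeds its stack of 2D views with a small CNN stem, adds
        a learnable per-view position embedding (standing in for the spatial
        relationship between cross-view images), and runs a Transformer
        encoder; branch features are fused for the benign/malignant output.
        """
        def __init__(self, n_views=3, dim=256, n_classes=2):
            super().__init__()
            self.stem = nn.Sequential(          # shared 2D feature extractor
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))
            # one set of view-position embeddings per branch
            self.pos = nn.Parameter(torch.zeros(2, n_views, dim))
            layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                               batch_first=True)
            self.branches = nn.ModuleList(
                [nn.TransformerEncoder(layer, num_layers=2) for _ in range(2)])
            self.head = nn.Linear(2 * dim, n_classes)

        def forward(self, views_a, views_b):
            # views_a, views_b: (batch, n_views, 1, H, W) multi-view stacks
            outs = []
            for i, views in enumerate((views_a, views_b)):
                b, v = views.shape[:2]
                tokens = self.stem(views.flatten(0, 1)).view(b, v, -1)
                tokens = tokens + self.pos[i]         # inject view positions
                outs.append(self.branches[i](tokens).mean(dim=1))  # pool views
            return self.head(torch.cat(outs, dim=-1))

    # Example: two branches, each receiving three 128x128 views per case.
    model = DualBranchMultiViewTransformer()
    logits = model(torch.rand(2, 3, 1, 128, 128),
                   torch.rand(2, 3, 1, 128, 128))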