Prediction Model of Tumor Mutation Burden for Lung Adenocarcinoma Based on Pathological Tissue Slice
Meng Xiangfu1*, Yang Ziyi1, Yang Xiaolin2, Hou Jiayue3
1(School of Electronics and Information Engineering, Liaoning Technical University, Huludao 125000, Liaoning, China) 2(Institute of Basic Medical Sciences, Chinese Academy of Medical Sciences, School of Basic Medicine, Peking Union Medical College, Beijing 100005, China) 3(School of Communications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210023, China)
Abstract: Lung cancer is one of the deadliest malignancies; non-small cell lung cancer (NSCLC) in particular poses a significant threat to public health. Recent medical research has highlighted the crucial role of tumor mutation burden (TMB) in predicting the efficacy of immunotherapy and chemotherapy for cancer treatment. However, traditional methods that calculate TMB through genetic sequencing suffer from drawbacks such as high detection costs, lengthy processing times, and sample dependency. To address these problems, this paper proposed a novel deep learning model named FCA-Former, which combined convolutional neural networks and self-attention mechanisms to predict TMB. The model employed CoAtNet as its backbone network, integrating coordinate attention and depthwise separable convolutions to enhance computational efficiency and the extraction of global features from pathological tissue biopsy images. The experimental data, sourced from the TCGA database, comprised a dataset of lung adenocarcinoma digital pathology images containing 271 samples with high TMB levels and 66 samples with low TMB levels. The experimental results demonstrated the effectiveness of the proposed approach, which achieved a maximum area under the curve (AUC) of 98.1%, outperforming the state-of-the-art RcaNet method by 9.8%. These results have significant implications for guiding prognostic and therapeutic strategies for NSCLC patients.
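The paper's FCA-Former implementation is not reproduced here, but the two components named in the abstract, coordinate attention [22] and depthwise separable convolution [20], are well documented. The following is a minimal PyTorch sketch of how such a pair of modules could be composed; all class names, layer sizes, and hyperparameters are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Coordinate attention (Hou et al., CVPR 2021): factorizes spatial
    attention into two 1-D encodings along the height and width axes,
    so positional information is preserved in the attention maps."""
    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # pool over width  -> (B, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # pool over height -> (B, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        xh = self.pool_h(x)                        # (B, C, H, 1)
        xw = self.pool_w(x).permute(0, 1, 3, 2)    # (B, C, W, 1)
        # Joint 1x1 transform over the concatenated directional encodings.
        y = self.act(self.bn(self.conv1(torch.cat([xh, xw], dim=2))))
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.conv_h(yh))                        # (B, C, H, 1)
        aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))    # (B, C, 1, W)
        return x * ah * aw                         # reweight features per row and column

class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution (MobileNet-style): a per-channel
    3x3 spatial filter followed by a 1x1 pointwise projection, which cuts
    FLOPs relative to a standard convolution."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, padding=1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Illustrative use on a pathology image tile: a convolutional stage
# followed by coordinate attention, as in the hybrid design the abstract describes.
block = nn.Sequential(DepthwiseSeparableConv(64, 128), CoordinateAttention(128))
out = block(torch.randn(2, 64, 224, 224))  # -> torch.Size([2, 128, 224, 224])
```

In a CoAtNet-style hybrid, stages like this would handle the early, high-resolution feature maps, with self-attention blocks [23] applied at lower resolutions where their quadratic cost is affordable.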
Meng Xiangfu, Yang Ziyi, Yang Xiaolin, Hou Jiayue. Prediction Model of Tumor Mutation Burden for Lung Adenocarcinoma Based on Pathological Tissue Slice. Chinese Journal of Biomedical Engineering, 2023, 42(6): 698-709.
[1] Xia C, Dong X, Li H, et al. Cancer statistics in China and United States, 2022: profiles, trends, and determinants [J]. Chinese Medical Journal, 2022, 135(5): 584-590.
[2] Expert Committee on Non-Small Cell Lung Cancer of the Chinese Society of Clinical Oncology, Expert Committee on Vascular Targeted Therapy of the Chinese Society of Clinical Oncology. Expert consensus on the application of tumor mutation burden in lung cancer immunotherapy [J]. Chinese Journal of Lung Cancer, 2021, 24(11): 743-752. (in Chinese)
[3] Lawrence MS, Stojanov P, Polak P, et al. Mutational heterogeneity in cancer and the search for new cancer-associated genes [J]. Nature, 2013, 499(7457): 214-218.
[4] Alexandrov LB, Nik-Zainal S, Wedge DC, et al. Signatures of mutational processes in human cancer [J]. Nature, 2013, 500(7463): 415-421.
[5] Rizvi NA, Hellmann MD, Snyder A, et al. Mutational landscape determines sensitivity to PD-1 blockade in non-small cell lung cancer [J]. Science, 2015, 348(6230): 124-128.
[6] Carbone DP, Reck M, Paz-Ares L, et al. First-line nivolumab in stage IV or recurrent non-small-cell lung cancer [J]. New England Journal of Medicine, 2017, 376(25): 2415-2426.
[7] Hellmann MD, Ciuleanu TE, Pluzanski A, et al. Nivolumab plus ipilimumab in lung cancer with a high tumor mutational burden [J]. New England Journal of Medicine, 2018, 378(22): 2093-2104.
[8] Gunduz C, Yener B, Gultekin SH. The cell graphs of cancer [J]. Bioinformatics, 2004, 20(Suppl 1): i145-i151.
[9] Altunbay D, Cigir C, Sokmensuer C, et al. Color graphs for automated cancer diagnosis and grading [J]. IEEE Transactions on Biomedical Engineering, 2009, 57(3): 665-674.
[10] Xu Y, Mo T, Feng Q, et al. Deep learning of feature representation with multiple instance learning for medical image analysis [C]//2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Florence: IEEE, 2014: 1626-1630.
[11] Cao R, Yang F, Ma SC, et al. Development and interpretation of a pathomics-based model for the prediction of microsatellite instability in colorectal cancer [J]. Theranostics, 2020, 10(24): 11080-11091.
[12] Zhang T, Feng Y, Zhao Y, et al. MSHT: multi-stage hybrid transformer for the ROSE image analysis of pancreatic cancer [J]. IEEE Journal of Biomedical and Health Informatics, 2023, 27(4): 1946-1957.
[13] Shao Z, Bian H, Chen Y, et al. TransMIL: transformer based correlated multiple instance learning for whole slide image classification [C]//Advances in Neural Information Processing Systems. Montreal: MIT Press, 2021: 2136-2147.
[14] Sun Dewei, Wang Zhigang, Yang Xiaolin, et al. Prediction of tumor mutation burden of lung adenocarcinoma based on deep learning [J]. Chinese Journal of Biomedical Engineering, 2021, 40(6): 681-690. (in Chinese)
[15] Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 2818-2826.
[16] Liu Deng, Yang Xiaolin, Meng Xiangfu. RcaNet: a deep learning model for predicting tumor mutation burden [J]. Chinese Journal of Biomedical Engineering, 2023, 42(1): 51-61. (in Chinese)
[17] Dai Z, Liu H, Le QV, et al. CoAtNet: marrying convolution and attention for all data sizes [C]//Advances in Neural Information Processing Systems. Montreal: MIT Press, 2021: 3965-3977.
[18] Tan M, Le Q. EfficientNetV2: smaller models and faster training [C]//International Conference on Machine Learning. Vienna: ACM, 2021: 10096-10106.
[19] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas: IEEE, 2016: 770-778.
[20] Howard AG, Zhu M, Chen B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications [EB/OL]. https://arxiv.org/abs/1704.04861, 2017-04-17/2023-02-02.
[21] Tan M, Le Q. EfficientNet: rethinking model scaling for convolutional neural networks [C]//International Conference on Machine Learning. Long Beach: ACM, 2019: 6105-6114.
[22] Hou Q, Zhou D, Feng J. Coordinate attention for efficient mobile network design [C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Kuala Lumpur: IEEE, 2021: 13713-13722.
[23] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need [C]//The 31st Conference on Neural Information Processing Systems. Long Beach: Neural Information Processing Systems, 2017: 1-11.
[24] Dosovitskiy A, Beyer L, Kolesnikov A, et al. An image is worth 16×16 words: transformers for image recognition at scale [C]//The 9th International Conference on Learning Representations. Vienna: Elsevier, 2021: 1-21.
[25] Loshchilov I, Hutter F. Decoupled weight decay regularization [C]//International Conference on Learning Representations (ICLR). New Orleans: IEEE, 2019: 1-18.
[26] Liang X. Image-based post-disaster inspection of reinforced concrete bridge systems using deep learning with Bayesian optimization [J]. Computer-Aided Civil and Infrastructure Engineering, 2019, 34(5): 415-430.
[27] Snoek J, Rippel O, Swersky K, et al. Scalable Bayesian optimization using deep neural networks [C]//International Conference on Machine Learning. Lille: ACM, 2015: 2171-2180.
[28] Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks [J]. Communications of the ACM, 2017, 60(6): 84-90.
[29] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition [C]//Proceedings of the International Conference on Learning Representations. San Diego: IEEE, 2014: 1-14.
[30] Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks [C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Honolulu: IEEE, 2017: 4700-4708.
[31] Chou HP, Chang SC, Pan JY, et al. Remix: rebalanced mixup [C]//Computer Vision - ECCV 2020 Workshops. Glasgow: Springer International Publishing, 2020: 95-110.
[32] Liu Y, Sangineto E, Bi W, et al. Efficient training of visual transformers with small datasets [C]//Advances in Neural Information Processing Systems. Montreal: MIT Press, 2021: 23818-23830.
[33] Chen CFR, Fan Q, Panda R. CrossViT: cross-attention multi-scale vision transformer for image classification [C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. Kyoto: IEEE, 2021: 357-366.