Deep Learning-Based Detection of Bone Tumors around the Knee in X-rays of Children. Journal of clinical medicine. Even though tumors in children are rare, they are the second most common cause of death in patients under 18 years of age. More often than in other age groups, pediatric patients suffer from malignancies of the bone, and these mostly occur in the area around the knee. One problem in treatment is the early detection of bone tumors, especially on X-rays; their rarity and non-specific clinical symptoms further prolong the time to diagnosis. Nevertheless, an early diagnosis is crucial, as it can facilitate treatment and therefore improve the prognosis of affected children. A new approach to evaluating X-ray images using artificial intelligence may facilitate the detection of suspicious lesions and hence accelerate referral to a specialized center. We implemented a Vision Transformer model for image classification of healthy and pathological X-rays. To tackle the limited amount of data, we used a pretrained model and implemented extensive data augmentation. Discrete parameters were described by incidence and percentage ratio, and continuous parameters by median, standard deviation and variance. For the evaluation of the model, accuracy, sensitivity and specificity were computed. The two-entity classification of the healthy control group and the pathological group resulted in a cross-validated accuracy of 89.1%, a sensitivity of 82.2% and a specificity of 93.2% on the test groups. Grad-CAMs were created to ensure the plausibility of the predictions. The proposed approach, using state-of-the-art deep learning methodology to detect bone tumors on knee X-rays of children, achieved very good results. With further improvement of the algorithm, enlargement of the dataset and removal of potential biases, this could become a useful additional tool, especially to support general practitioners in the early, accurate and specific diagnosis of bone lesions in young patients. 10.3390/jcm12185960
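For context, a minimal sketch of the kind of setup this abstract describes: fine-tuning an ImageNet-pretrained Vision Transformer for two-class (healthy vs. pathological) X-ray classification with data augmentation. The folder layout, augmentation choices, and hyperparameters below are illustrative assumptions, not the authors' configuration.

```python
# Sketch only: pretrained ViT fine-tuning with augmentation (assumed setup).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

augment = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # X-rays are single-channel
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: knee_xrays/train/{healthy,pathological}/*.png
train_set = datasets.ImageFolder("knee_xrays/train", transform=augment)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

# Start from ImageNet weights and replace the classification head (2 classes).
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Grad-CAM style saliency maps, as mentioned in the abstract, would then be generated from the trained model with a separate explainability step.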
Deep Learning for Classification of Bone Lesions on Routine MRI. Eweje Feyisope R, Bao Bingting, Wu Jing, Dalal Deepa, Liao Wei-Hua, He Yu, Luo Yongheng, Lu Shaolei, Zhang Paul, Peng Xianjing, Sebro Ronnie, Bai Harrison X, States Lisa. EBioMedicine. BACKGROUND:Radiologists have difficulty distinguishing benign from malignant bone lesions because these lesions may have similar imaging appearances. The purpose of this study was to develop a deep learning algorithm that can differentiate benign and malignant bone lesions using routine magnetic resonance imaging (MRI) and patient demographics. METHODS:1,060 histologically confirmed bone lesions with T1- and T2-weighted pre-operative MRI were retrospectively identified and included, with lesions from 4 institutions used for model development and internal validation, and data from a fifth institution used for external validation. Image-based models were generated using the EfficientNet-B0 architecture, and a logistic regression model was trained using patient age, sex, and lesion location. A voting ensemble was created as the final model. The performance of the model was compared to classification performance by radiology experts. FINDINGS:The cohort had a mean age of 30±23 years and was 58.3% male, with 582 benign and 478 malignant lesions. Compared to a contrived expert committee result, the ensemble deep learning model achieved (ensemble vs. experts): similar accuracy (0.76 vs. 0.73, p=0.7), sensitivity (0.79 vs. 0.81, p=1.0) and specificity (0.75 vs. 0.66, p=0.48), with a ROC AUC of 0.82. On external testing, the model achieved a ROC AUC of 0.79. INTERPRETATION:Deep learning can be used to distinguish benign and malignant bone lesions on par with experts. These findings could aid in the development of computer-aided diagnostic tools to reduce unnecessary referrals to specialized centers from community clinics and limit unnecessary biopsies. FUNDING:This work was funded by a Radiological Society of North America Research Medical Student Grant (#RMS2013) and supported by the Amazon Web Services Diagnostic Development Initiative. 10.1016/j.ebiom.2021.103402
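As an illustration of the ensemble idea described here (an image network plus a logistic regression on age, sex, and lesion location, combined by voting), a small sketch follows. The tabular encoding, synthetic data, and equal-weight soft voting are assumptions for the example only, not the published model.

```python
# Sketch only: EfficientNet-B0 image branch + clinical logistic regression,
# combined by averaging malignancy probabilities (assumed soft voting).
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

# Image branch: EfficientNet-B0 with a binary head (benign vs. malignant).
img_model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
img_model.classifier[1] = nn.Linear(img_model.classifier[1].in_features, 2)
img_model.eval()

# Clinical branch: logistic regression on [age, sex, lesion_location_code].
# The tabular rows are synthetic, only to keep the example self-contained.
X_clin = np.array([[30, 1, 2], [65, 0, 5], [12, 1, 1], [48, 0, 3]], dtype=float)
y = np.array([0, 1, 0, 1])  # 0 = benign, 1 = malignant
clin_model = LogisticRegression().fit(X_clin, y)

def ensemble_predict(image_tensor, clinical_row):
    """Soft voting: average the malignancy probabilities of both branches."""
    with torch.no_grad():
        p_img = torch.softmax(img_model(image_tensor.unsqueeze(0)), dim=1)[0, 1].item()
    p_clin = clin_model.predict_proba(clinical_row.reshape(1, -1))[0, 1]
    return (p_img + p_clin) / 2

# Toy call with a random tensor standing in for a preprocessed MRI slice.
prob_malignant = ensemble_predict(torch.rand(3, 224, 224), X_clin[0])
```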
Multitask Deep Learning for Segmentation and Classification of Primary Bone Tumors on Radiographs. von Schacky Claudio E, Wilhelm Nikolas J, Schäfer Valerie S, Leonhardt Yannik, Gassert Felix G, Foreman Sarah C, Gassert Florian T, Jung Matthias, Jungmann Pia M, Russe Maximilian F, Mogler Carolin, Knebel Carolin, von Eisenhart-Rothe Rüdiger, Makowski Marcus R, Woertler Klaus, Burgkart Rainer, Gersing Alexandra S. Radiology. Background An artificial intelligence model that assesses primary bone tumors on radiographs may assist in the diagnostic workflow. Purpose To develop a multitask deep learning (DL) model for simultaneous bounding box placement, segmentation, and classification of primary bone tumors on radiographs. Materials and Methods This retrospective study analyzed bone tumors on radiographs acquired prior to treatment and obtained from patient data from January 2000 to June 2020. Benign or malignant bone tumors were diagnosed in all patients by using the histopathologic findings as the reference standard. By using split-sample validation, 70% of the patients were assigned to the training set, 15% were assigned to the validation set, and 15% were assigned to the test set. The final performance was evaluated on an external test set by using geographic validation, with accuracy, sensitivity, specificity, and 95% CIs being used for classification, the intersection over union (IoU) being used for bounding box placements, and the Dice score being used for segmentations. Results Radiographs from 934 patients (mean age, 33 years ± 19 [standard deviation]; 419 women) were evaluated in the internal data set, which included 667 benign bone tumors and 267 malignant bone tumors. Six hundred fifty-four patients were in the training set, 140 were in the validation set, and 140 were in the test set. One hundred eleven patients were in the external test set. The multitask DL model achieved 80.2% (89 of 111; 95% CI: 72.8, 87.6) accuracy, 62.9% (22 of 35; 95% CI: 47, 79) sensitivity, and 88.2% (67 of 76; 95% CI: 81, 96) specificity in the classification of bone tumors as malignant or benign. The model achieved an IoU of 0.52 ± 0.34 for bounding box placements and a mean Dice score of 0.60 ± 0.37 for segmentations. The model accuracy was higher than that of two radiologic residents (71.2% and 64.9%; P = .002 and P < .001, respectively) and was comparable with that of two musculoskeletal fellowship-trained radiologists (83.8% and 82.9%; P = .13 and P = .25, respectively) in classifying a tumor as malignant or benign. Conclusion The developed multitask deep learning model allowed for accurate and simultaneous bounding box placement, segmentation, and classification of primary bone tumors on radiographs. © RSNA, 2021 See also the editorial by Carrino in this issue. 10.1148/radiol.2021204531
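The two geometric metrics reported for the detection and segmentation tasks, IoU for bounding boxes and the Dice score for masks, can be computed as in the short sketch below; the box format and toy inputs are assumptions for illustration.

```python
# Sketch only: intersection over union (boxes) and Dice score (binary masks).
import numpy as np

def box_iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def dice_score(pred, target):
    """Dice coefficient of two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2 * inter / (pred.sum() + target.sum())

print(box_iou([10, 10, 50, 50], [20, 20, 60, 60]))  # partially overlapping boxes
print(dice_score(np.ones((8, 8)), np.eye(8)))       # toy masks
```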
A comparative analysis of CNN-based deep learning architectures for early diagnosis of bone cancer using CT images. Scientific reports. Bone cancer is a rare disease in which cells in the bone grow out of control, destroying normal bone tissue. A benign type of bone cancer is harmless and does not spread to other body parts, whereas a malignant type can spread to other body parts and might be harmful. According to Cancer Research UK (2021), the survival rate for patients with bone cancer is 40%, and early detection can increase the chances of survival by providing treatment at the initial stages. Early detection of these lumps or masses can reduce the risk of death and allow bone cancer to be treated early. The goal of this study is to utilize image processing techniques and deep learning-based convolutional neural networks (CNNs) to classify normal and cancerous bone images. Medical image processing techniques, such as pre-processing (e.g., median filtering), K-means clustering segmentation, and Canny edge detection, were used to detect the cancer region in computed tomography (CT) images for parosteal osteosarcoma, enchondroma and osteochondroma types of bone cancer. After segmentation, the normal and cancer-affected images were classified using various existing CNN-based models. The results revealed that the AlexNet model showed the best performance, with a training accuracy of 98%, a validation accuracy of 98%, and a testing accuracy of 100%. 10.1038/s41598-024-52719-8
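A rough sketch of the classical pipeline named in this abstract (median filtering, K-means intensity clustering, then Canny edge detection) is shown below; the synthetic input image and parameter values are placeholders, not the study's actual settings.

```python
# Sketch only: median filter -> K-means intensity clustering -> Canny edges.
import cv2
import numpy as np

# In practice this would be cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE);
# a synthetic image keeps the sketch self-contained.
img = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(img, (128, 128), 40, 180, -1)                                  # bright blob
img = cv2.add(img, np.random.randint(0, 30, img.shape, dtype=np.uint8))   # noise

# 1. Noise suppression with a median filter.
denoised = cv2.medianBlur(img, 5)

# 2. K-means clustering of pixel intensities into k regions.
k = 3
pixels = denoised.reshape(-1, 1).astype(np.float32)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, k, None, criteria, 10,
                                cv2.KMEANS_RANDOM_CENTERS)
segmented = centers[labels.flatten()].reshape(img.shape).astype(np.uint8)

# 3. Canny edge detection on the segmented image to outline candidate regions.
edges = cv2.Canny(segmented, 100, 200)
cv2.imwrite("edges.png", edges)
```

The segmented or edge-enhanced images would then be fed to the CNN classifiers compared in the study.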
Malignant Bone Tumors Diagnosis Using Magnetic Resonance Imaging Based on Deep Learning Algorithms. Medicina (Kaunas, Lithuania). Background and Objectives: Malignant bone tumors represent a major problem due to their aggressiveness and low survival rate. One of the determining factors for improving vital and functional prognosis is the shortening of the time between the onset of symptoms and the start of treatment. The objective of the study is to predict the malignancy of a bone tumor from magnetic resonance imaging (MRI) using deep learning algorithms. Materials and Methods: The cohort contained 23 patients (14 women and 9 men, aged between 15 and 80 years). Two pretrained ResNet50 image classifiers are used to classify T1- and T2-weighted MRI scans. To predict the malignancy of a tumor, a clinical model is used: a feed-forward neural network whose inputs are patient clinical data and the output values of the T1 and T2 classifiers. Results: For the training step, accuracies of 93.67% for the T1 classifier and 86.67% for the T2 classifier were obtained. In validation, both classifiers obtained 95.00% accuracy. The clinical model had an accuracy of 80.84% for the training phase and 80.56% for validation. The receiver operating characteristic (ROC) curve of the clinical model shows that the algorithm can perform class separation. Conclusions: The proposed method is based on pretrained deep learning classifiers which do not require manual segmentation of the MRI images. These algorithms can be used to predict the malignancy of a tumor and can shorten the time to diagnosis and treatment. While the proposed method requires minimal intervention from an imagist, it needs to be tested on a larger cohort of patients. 10.3390/medicina58050636
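The fusion step described here, a feed-forward network taking patient clinical data together with the outputs of the T1 and T2 ResNet50 classifiers, could look roughly like the sketch below; the number of clinical features, hidden units, and random inputs are assumptions for illustration only.

```python
# Sketch only: two ResNet50 image branches feeding a small clinical fusion net.
import torch
import torch.nn as nn
from torchvision import models

def make_resnet50_binary():
    m = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    m.fc = nn.Linear(m.fc.in_features, 2)  # benign vs. malignant head
    return m.eval()

t1_model, t2_model = make_resnet50_binary(), make_resnet50_binary()

# Clinical fusion model: 4 hypothetical clinical features + 2 classifier outputs.
clinical_net = nn.Sequential(
    nn.Linear(4 + 2, 16), nn.ReLU(),
    nn.Linear(16, 1), nn.Sigmoid(),
)

def predict_malignancy(t1_img, t2_img, clinical_feats):
    """Return a malignancy score from the two image branches plus clinical data."""
    with torch.no_grad():
        p_t1 = torch.softmax(t1_model(t1_img.unsqueeze(0)), dim=1)[0, 1]
        p_t2 = torch.softmax(t2_model(t2_img.unsqueeze(0)), dim=1)[0, 1]
        fused = torch.cat([clinical_feats, p_t1.view(1), p_t2.view(1)])
        return clinical_net(fused).item()

# Toy call with random tensors standing in for preprocessed MRI slices.
score = predict_malignancy(torch.rand(3, 224, 224), torch.rand(3, 224, 224),
                           torch.rand(4))
```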
Deep learning-based diagnostic models for bone lesions: is current research ready for clinical translation? European radiology 10.1007/s00330-023-10555-w
A radiograph-based deep learning model improves radiologists' performance for classification of histological types of primary bone tumors: A multicenter study. European journal of radiology. PURPOSE:To develop a deep learning (DL) model for classifying histological types of primary bone tumors (PBTs) using radiographs and evaluate its clinical utility in assisting radiologists. METHODS:This retrospective study included 878 patients with pathologically confirmed PBTs from two centers (638, 77, 80, and 83 for the training, validation, internal test, and external test sets, respectively). We classified PBTs into five categories by histological types: chondrogenic tumors, osteogenic tumors, osteoclastic giant cell-rich tumors, other mesenchymal tumors of bone, or other histological types of PBTs. A DL model combining radiographs and clinical features based on the EfficientNet-B3 was developed for five-category classification. The area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity were calculated to evaluate model performance. The clinical utility of the model was evaluated in an observer study with four radiologists. RESULTS:The combined model achieved a macro average AUC of 0.904/0.873, with an accuracy of 67.5%/68.7%, a macro average sensitivity of 66.9%/57.2%, and a macro average specificity of 92.1%/91.6% on the internal/external test set, respectively. Model-assisted analysis improved accuracy, interpretation time, and confidence for junior (50.6% vs. 72.3%, 53.07 s vs. 18.55 s, and 3.10 vs. 3.73 on a 5-point Likert scale [P < 0.05 for each], respectively) and senior radiologists (68.7% vs. 75.3%, 32.50 s vs. 21.42 s, and 4.19 vs. 4.37 [P < 0.05 for each], respectively). CONCLUSION:The combined DL model effectively classified histological types of PBTs and assisted radiologists in achieving better classification results than their independent visual assessment. 10.1016/j.ejrad.2024.111496
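The macro-average AUC reported for the five-category task is a one-vs-rest average over classes; a small sketch with synthetic labels and probabilities shows how such a figure can be computed.

```python
# Sketch only: macro-average one-vs-rest AUC on synthetic five-class data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = np.tile(np.arange(5), 40)             # five tumor categories, 200 cases
y_score = rng.dirichlet(np.ones(5), size=200)  # synthetic predicted probabilities

macro_auc = roc_auc_score(y_true, y_score, multi_class="ovr", average="macro")
print(f"macro-average one-vs-rest AUC: {macro_auc:.3f}")
```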
Deep learning-based classification of primary bone tumors on radiographs: A preliminary study. He Yu, Pan Ian, Bao Bingting, Halsey Kasey, Chang Marcello, Liu Hui, Peng Shuping, Sebro Ronnie A, Guan Jing, Yi Thomas, Delworth Andrew T, Eweje Feyisope, States Lisa J, Zhang Paul J, Zhang Zishu, Wu Jing, Peng Xianjing, Bai Harrison X. EBioMedicine. BACKGROUND:To develop a deep learning model to classify primary bone tumors from preoperative radiographs and compare its performance with radiologists. METHODS:A total of 1356 patients (2899 images) with histologically confirmed primary bone tumors and pre-operative radiographs were identified from five institutions' pathology databases. Manual cropping was performed by radiologists to label the lesions. Binary discriminatory capacity (benign versus not-benign and malignant versus not-malignant) and three-way classification (benign versus intermediate versus malignant) performance of our model were evaluated. The generalizability of our model was investigated on data from an external test set. Final model performance was compared with interpretations from five radiologists of varying levels of experience using permutation tests. FINDINGS:For benign vs. not benign, the model achieved an area under the curve (AUC) of 0.894 and 0.877 on cross-validation and external testing, respectively. For malignant vs. not malignant, the model achieved an AUC of 0.907 and 0.916 on cross-validation and external testing, respectively. For three-way classification, the model achieved 72.1% accuracy vs. 74.6% and 72.1% for the two subspecialists on cross-validation (p = 0.03 and p = 0.52, respectively). On external testing, the model achieved 73.4% accuracy vs. 69.3%, 73.4%, 73.1%, 67.9%, and 63.4% for the two subspecialists and three junior radiologists (p = 0.14, p = 0.89, p = 0.93, p = 0.02, p < 0.01 for radiologists 1-5, respectively). INTERPRETATION:Deep learning can classify primary bone tumors using conventional radiographs in a multi-institutional dataset with accuracy similar to subspecialists, and with better performance than junior radiologists. FUNDING:The project described was supported by the RSNA Research & Education Foundation, through grant number RSCH2004 to Harrison X. Bai. 10.1016/j.ebiom.2020.103121
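The abstract compares model and reader accuracies with permutation tests; the exact procedure is not given, so the sketch below shows one common paired variant (randomly swapping per-case correctness between the two readers) on synthetic data.

```python
# Sketch only: paired permutation test for an accuracy difference (assumed variant).
import numpy as np

rng = np.random.default_rng(0)
model_correct = rng.random(150) < 0.73   # per-case correctness of the model
reader_correct = rng.random(150) < 0.70  # per-case correctness of a radiologist

observed = model_correct.mean() - reader_correct.mean()

diffs = []
for _ in range(10_000):
    # Under the null of no difference, swap the two readings per case at random.
    swap = rng.random(150) < 0.5
    a = np.where(swap, reader_correct, model_correct)
    b = np.where(swap, model_correct, reader_correct)
    diffs.append(a.mean() - b.mean())

p_value = np.mean(np.abs(diffs) >= abs(observed))
print(f"observed accuracy difference = {observed:.3f}, p = {p_value:.3f}")
```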
Primary bone tumor detection and classification in full-field bone radiographs via YOLO deep learning model. European radiology. OBJECTIVES:Automatic bone lesion detection and classification presents a critical challenge and is essential to support radiologists in making an accurate diagnosis of bone lesions. In this paper, we aimed to develop a deep learning model based on You Only Look Once (YOLO) to detect and classify bone lesions on full-field radiographs with limited manual intervention. METHODS:In this retrospective study, we used 1085 bone tumor radiographs and 345 normal bone radiographs from two centers between January 2009 and December 2020 to train and test our YOLO deep learning (DL) model. The trained model detected bone lesions and then classified these radiographs into normal, benign, intermediate, or malignant types. The intersection over union (IoU) was used to assess the model's performance in the detection task. Confusion matrices and Cohen's kappa scores were used to evaluate classification performance. Two radiologists compared diagnostic performance with the trained model using the external validation set. RESULTS:In the detection task, the model achieved accuracies of 86.36% and 85.37% in the internal and external validation sets, respectively. The DL model, radiologist 1, and radiologist 2 achieved Cohen's kappa scores of 0.8187, 0.7927, and 0.9077, respectively, for four-way classification in the external validation set. The YOLO DL model showed significantly higher accuracy for intermediate bone tumor classification than radiologist 1 (95.73% vs 88.08%, p = 0.004). CONCLUSIONS:The developed YOLO DL model could be used to assist radiologists at all stages of bone lesion detection and classification in full-field bone radiographs. KEY POINTS:• The YOLO DL model can automatically detect bone neoplasms from full-field radiographs in one shot and then simultaneously classify radiographs into normal, benign, intermediate, or malignant. • The dataset used in this retrospective study includes normal bone radiographs. • YOLO can detect even some challenging cases with small lesion volumes. 10.1007/s00330-022-09289-y
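Cohen's kappa, the agreement statistic used here for the four-way classification (normal, benign, intermediate, malignant), can be computed as in the short sketch below; the labels are synthetic and only illustrate the calculation.

```python
# Sketch only: Cohen's kappa between reference labels and a reader's four-way calls.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
reference = rng.integers(0, 4, size=300)  # 0=normal, 1=benign, 2=intermediate, 3=malignant
# A reader that agrees with the reference roughly 85% of the time.
reader = np.where(rng.random(300) < 0.85, reference, rng.integers(0, 4, size=300))

print(f"Cohen's kappa: {cohen_kappa_score(reference, reader):.3f}")
```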