Please use this identifier to cite or link to this item:
https://hdl.handle.net/11499/57806
Title: Deep convolutional neural network for automated staging of periodontal bone loss severity on bite-wing radiographs: an eigen-cam explainability mapping approach
Authors: Ertürk, Mediha; Öziç, Muhammet Usame; Tassoker, Melek
Keywords: Artificial intelligence; Bite-wing; Deep learning; Periodontal bone loss; YOLOv8; Peri-Implant Diseases; Artificial-Intelligence; Panoramic Radiographs; Compromised Teeth; Classification; Diagnosis
Publisher: Springer
Abstract: Periodontal disease is a significant global oral health problem. Radiographic staging is critical in determining periodontitis severity and treatment requirements. This study aims to automatically stage periodontal bone loss with a deep learning approach applied to bite-wing images. A total of 1752 bite-wing images were used for the study. Radiological examinations were classified into four groups: healthy (normal), no bone loss; stage I (mild destruction), bone loss in the coronal third (< 15%); stage II (moderate destruction), bone loss in the coronal third, from 15 to 33%; stage III-IV (severe destruction), bone loss extending from the middle third to the apical third, with furcation destruction (> 33%). All images were resized to 512 × 400 pixels using bilinear interpolation. The data were divided into 80% training-validation and 20% testing. The classification module of the YOLOv8 deep learning model was used for the artificial intelligence-based classification of the images. The four-class model was trained with fivefold cross-validation after transfer learning and fine-tuning. After training, the 20% test data, which the system had never seen, was analyzed using the weights obtained in each cross-validation fold. Training and test results were reported as average accuracy, precision, recall, and F1-score. Test images were further analyzed with Eigen-CAM explainability heat maps. In classifying bite-wing images as healthy, mild destruction, moderate destruction, and severe destruction, training performance was 86.100% accuracy, 84.790% precision, 82.350% recall, and 84.411% F1-score, and test performance was 83.446% accuracy, 81.742% precision, 80.883% recall, and 81.090% F1-score. The deep learning model gave successful results in staging periodontal bone loss on bite-wing images.
Classification scores were relatively high for the normal (no bone loss) and severe bone loss classes, as these are more clearly visible on bite-wing images than mild and moderate destruction.
URI: https://doi.org/10.1007/s10278-024-01218-3
https://hdl.handle.net/11499/57806
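The data partition described in the abstract (an 80/20 training-validation/test split, with the training-validation portion further divided into five cross-validation folds) can be sketched in plain Python. This is a generic illustration under stated assumptions, not the authors' code; the function name, seed, and round-robin fold assignment are all hypothetical choices.

```python
import random

def split_and_folds(n_items, test_frac=0.2, k=5, seed=42):
    """Shuffle item indices, hold out a test set, and partition the
    remaining training-validation indices into k cross-validation folds.
    The seed and round-robin assignment are illustrative assumptions."""
    rng = random.Random(seed)
    indices = list(range(n_items))
    rng.shuffle(indices)
    n_test = int(n_items * test_frac)          # 20% held-out test set
    test_idx = indices[:n_test]
    trainval_idx = indices[n_test:]            # remaining 80%
    # Assign training-validation indices round-robin to k disjoint folds
    folds = [trainval_idx[i::k] for i in range(k)]
    return test_idx, folds

# 1752 bite-wing images, as reported in the study
test_idx, folds = split_and_folds(1752)
```

In each cross-validation round, one fold serves as validation while the remaining four are used for training; the held-out 20% is evaluated only after training, matching the procedure the abstract describes.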
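The reported performance metrics (accuracy plus macro-averaged precision, recall, and F1-score over the four classes) can be derived from a confusion matrix. The sketch below is a generic implementation, not the authors' evaluation code, and the example matrix values are hypothetical.

```python
def macro_metrics(cm):
    """Compute accuracy and macro-averaged precision, recall, and F1
    from a square confusion matrix cm[true_class][predicted_class]."""
    k = len(cm)
    total = sum(sum(row) for row in cm)
    accuracy = sum(cm[i][i] for i in range(k)) / total
    precisions, recalls, f1s = [], [], []
    for c in range(k):
        tp = cm[c][c]
        pred_c = sum(cm[r][c] for r in range(k))  # column sum: predicted as c
        true_c = sum(cm[c])                       # row sum: truly c
        p = tp / pred_c if pred_c else 0.0
        r = tp / true_c if true_c else 0.0
        f1 = 2 * p * r / (p + r) if (p + r) else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(f1)
    return accuracy, sum(precisions) / k, sum(recalls) / k, sum(f1s) / k

# Hypothetical 4-class confusion matrix (healthy, mild, moderate, severe);
# values are illustrative, not the paper's actual results.
cm = [[40, 3, 1, 0],
      [5, 30, 6, 1],
      [2, 7, 28, 4],
      [0, 1, 3, 44]]
acc, prec, rec, f1 = macro_metrics(cm)
```

Macro-averaging weights each class equally, which is why weaker per-class performance on the visually subtler mild and moderate stages pulls the averaged scores down, consistent with the abstract's closing observation.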
ISSN: 2948-2925; 2948-2933
Appears in Collections:
PubMed İndeksli Yayınlar Koleksiyonu / PubMed Indexed Publications Collection
Teknoloji Fakültesi Koleksiyonu / Faculty of Technology Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.