Abstract

Aim. This study applied a convolutional neural network (CNN) algorithm to automatically detect prosthetic restorations on panoramic radiographs using a deep learning system. Materials and Methods. A total of 5126 panoramic radiographs of adult patients were collected. For model training, .bmp, .jpeg, and .png files are required for the images, and .txt files containing five types of information are required for the labels. Of the panoramic radiographs, 10% were used as the test dataset. The labeling process yielded 2988 crowns and 2969 bridges in the dataset. Results. The mAP and mAR values were obtained at a confidence threshold of 0.1, and the TP, FP, FN, precision, recall, and F1 score values at a confidence threshold of 0.25. The YOLOv4 model demonstrated that accurate results could be obtained quickly, and bridge detection was more successful than crown detection. Conclusion. The detection of prosthetic restorations with artificial intelligence on panoramic radiography, which is widely used in clinical practice, offers physicians convenience in terms of diagnosis and time management.

1. Introduction

In dentistry, a correct diagnosis leads to a correct treatment plan. Prosthodontics is a complex branch of dentistry that offers the best diagnostic and treatment options. The long-term success of prosthetic restoration is achieved with an accurate diagnosis and treatment plan [1].

Panoramic imaging is frequently used in dental practice to screen the teeth and maxillofacial structures in a single image. Using these images, clinicians can evaluate the teeth and plan the patient's prosthetic rehabilitation. Although panoramic radiography (PR) is a quick and easy technique with a low radiation dose, interpreting radiographic images can be challenging owing to superimposition, distortion, and potential artifacts [2–4].

Artificial intelligence (AI) refers to the performance by computers of tasks that typically require human intelligence; it can be defined as the acquisition and learning of information by machines. AI recognizes speech, makes decisions, and supports medical diagnoses. It can detect anomalies in images that escape the expert eye and address problems that cannot be solved by humans alone [5].

Recently, AI and deep learning have been evolving rapidly. Deep learning is an AI-based approach that makes automated decisions. It comprises computational models with multiple layers of data processing that learn to represent data at several levels of abstraction. Convolutional neural networks (CNNs) are a special case of deep learning and are used in image processing and in analyzing radiological datasets [6].

Several studies in dentistry have applied deep learning to diagnosis and decision making [7–9]. Deep learning can detect caries and periapical lesions on periapical and panoramic images [10–12]. Applying deep learning to identify prosthetic restorations on PRs could free up clinical time for clinicians to focus on treatment planning and prosthodontic procedures. Additionally, examination findings (such as restorations, crowns, and bridge prostheses) can be classified using AI technology. Furthermore, AI can be used in undergraduate dental education: it offers support to dentistry students in prosthetic clinical internships and to less experienced auxiliary staff.

Although numerous studies on the automated detection of caries and root canal treatments using AI have been conducted, the use of AI technology in prosthodontics is limited [13]. This study proposes a prosthodontic restoration analysis model based on CNN with transfer learning. This model analyzes prosthodontics using PR and provides clinicians with a more accurate treatment plan. This study improves detection accuracy and reduces the workload of dentists, who can thus focus more on treatment planning.

2. Materials and Methods

2.1. Dataset

This study collected a total of 5126 randomly chosen PRs of adult patients from Istanbul Medipol University, Faculty of Dentistry. The entire dataset, consisting of adult panoramic X-rays, was obtained from this hospital's databases; sex and age were not considered when creating the dataset, with the aim of ensuring diversity and rendering the model generalizable. The data were anonymized and stored to protect confidentiality. Approval for this study was obtained from the institutional ethical committee (decision number 16/3, 2019/672).

During model training, .bmp, .jpeg, and .png files are required for the images, and .txt files containing five types of information (class_id, x_center, y_center, width, and height) are required for the labels [14]. The LabelImg tool [15] was used for labeling: the coordinates of each crown/bridge area were determined by drawing a rectangle on the image, and label files ready for training were created. The tagged coordinates were normalized by the width and height of the image.
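As a sketch, the coordinate normalization described above can be expressed as follows (the function name and the example box coordinates are illustrative, not taken from the study):

```python
def to_yolo_label(class_id, x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-coordinate bounding box to a YOLO .txt label line.

    YOLO label files contain: class_id x_center y_center width height,
    with all coordinates normalized by the image width and height.
    """
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Illustrative example: a crown (class 0) boxed at (400, 300)-(520, 420)
# on a hypothetical 2900 x 1200 pixel radiograph
print(to_yolo_label(0, 400, 300, 520, 420, 2900, 1200))
```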

A Planmeca Promax 2D panoramic system (Planmeca, Helsinki, Finland) operated at 68 kVp, 14 mA, and 12 s was used to obtain all PRs. Each crown or bridge in the maxilla and mandible was manually annotated with a bounding box in the LabelImg program: the crown (Figure 1(a)) and bridge (Figure 1(b)) locations were marked by drawing a bounding box, and all crowns and bridges were labeled. Each label was made by trainee dentists and subsequently verified by specialist dentists to confirm its accuracy. Of the PRs, 10% were used as the test dataset. The labeling process yielded 2988 crowns and 2969 bridges, which served as the ground truth for training and testing.

2.2. YOLOv4 Architecture

YOLO was proposed as an end-to-end neural network in 2015 [16]. In this architecture, each image is handled in its entirety and divided into an S × S grid during training. YOLO makes predictions for each grid cell resulting from this division and returns class probabilities as its output.

With over 64 million parameters, YOLOv4 is among the state-of-the-art architectures owing to its high precision and real-time performance.

Training was conducted on a server with an Nvidia RTX 2080 Ti (11 GB RAM) graphics card and 192 GB RAM. The parameters used in training for 30 epochs are listed in Table 1.

The model was developed and trained using the PyTorch deep learning framework in Python. The radiographic dataset was randomly divided into training and test sets: of the 5126 PRs, 4605 images were used for training and the remaining 521 for testing (a 90–10% split). During model training (Figure 2), transfer learning was performed using pretrained weights that had previously won the 2017 COCO competition [17]. Each image was resized to 608 × 608 pixels for training and testing (Figure 3).
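A minimal sketch of such a random split (the function name and file names are hypothetical; note that a naive 10% split of 5126 images yields 4614/512, so the authors' exact procedure, which produced 4605/521, may differ slightly):

```python
import random

def split_dataset(image_paths, test_fraction=0.1, seed=42):
    """Randomly split a list of image paths into training and test sets."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle for reproducibility
    n_test = int(len(paths) * test_fraction)
    return paths[n_test:], paths[:n_test]  # (train, test)

# Hypothetical file names for illustration
train, test = split_dataset([f"pr_{i}.png" for i in range(5126)])
print(len(train), len(test))  # 4614 512
```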

2.3. Performance Metrics

To examine the performance of the model, the confusion matrix (Table 2), which is crucial in image-processing problems, was used together with the precision (equation (1)), recall (equation (2)), mean average precision (equation (3)), mean average recall (equation (4)), and F1 score (equation (5)) metrics derived from it:

Precision = TP / (TP + FP), (1)

Recall = TP / (TP + FN), (2)

Mean average precision (mAP): mAP = (1/n) Σᵢ APᵢ, (3)

Mean average recall (mAR): mAR = (1/n) Σᵢ ARᵢ, (4)

F1 = 2 × Precision × Recall / (Precision + Recall), (5)

where n is the number of samples.
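These metrics follow their standard definitions and can be sketched directly from the counts (the counts in the example are illustrative only, not results from this study):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 score from TP, FP, and FN counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

def mean_over(values):
    """mAP/mAR: the mean of per-class AP or AR values."""
    return sum(values) / len(values)

# Illustrative counts only
p, r, f1 = detection_metrics(tp=80, fp=20, fn=25)
```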

3. Results

Table 3 presents the values obtained from the trained model on the test dataset: the mAP and mAR values obtained at a confidence threshold of 0.1, and the TP, FP, FN, precision, recall, and F1 score values obtained at a confidence threshold of 0.25. Based on these values, bridge detection was more successful than crown detection.

Figure 4 provides sample crown detection from the test set images, together with the ground truth label. In this image, a crown was correctly detected and the confidence score of detection was extremely high.

Figure 5 provides sample bridge detection from the test set images, together with the ground truth labels. In this image, all bridges were correctly detected and the confidence scores of detection were extremely high.

The average YOLOv4 inference time was 90 ms; hence, the model can process approximately 11 images per second. Together with AP values of 73.12% and 89.18% and close-to-real-time inference speed, the YOLOv4 model demonstrated that accurate results could be obtained more quickly than with other CNN-based object detection models. Moreover, the precision and recall curves and the high precision and recall values indicate that this study was statistically powerful.

4. Discussion

Dental panoramic radiographs are widely used for diagnosis in dentistry because of their relatively low dose and cost [17, 18]. However, studies on restoration are limited. In addition, owing to multiple superimpositions and distortions, challenging interpretations can cause misdiagnosis [2, 3].

AI technologies can deal with complex cases with multiple variables; therefore, the application of AI in prosthodontics is of high interest. AI algorithms promote evidence-based decision making in treatment plans, particularly for less experienced clinicians. In addition, AI technology provides an opportunity to easily analyze patient cases [13]. This study used the CNN algorithm YOLOv4, a real-time detection system that classifies targeted objects in a single pass.

The trained AI can appropriately distinguish between teeth and restorations. In this study, the F1 scores for detection were 0.76 for crowns and 0.89 for bridge restorations. The difference in detection success between crowns and bridges may be due to the width of the bridge prosthesis and the involvement of multiple teeth; crown restorations are more difficult to distinguish on PRs than bridge prostheses. A comparison with other studies that evaluated crown detection on PRs with AI revealed discrepancies. Basaran et al. [19] used 1084 images but excluded PRs with superposition artifacts; their F1 score was 0.91 for crowns and 0.82 for pontics. Similarly, Vinayahalingam et al. [20] randomly collected 2000 images and then excluded blurred and incomplete PRs; their F1 score was 0.951. The high values in these studies may be related to the exclusion of radiographs with artifacts and/or blurring. Abdalla-Aslan et al. [21] reported 100% accuracy for crown restorations in a pilot study; this high accuracy can be attributed to the analysis of only 83 panoramic images. Compared with these studies, the size of our dataset and the inclusion of all data increase the reliability of our results.

Compared with these crown detection studies, our F1 score and sensitivity values were lower. This may be related to the fact that the radiographs in our study were taken with different devices. In addition, we deliberately did not restrict the dataset to clean images, in order to show the applicability and adaptability of the program to all X-rays; only duplicate images were removed, and all standard data were used.

Apart from studies on PRs, some studies have measured detection success on intraoral images [22, 23]. Engels et al. [22] aimed to detect and categorize dental restorations; using 1761 images, they reported diagnostic accuracies of 97.8% for ceramic restorations and 99.4% for gold restorations. Similarly, Takahashi et al. [23] aimed to recognize dental prostheses using a deep learning object detection method. They reported that the system could detect silver-colored complete metal crowns, gold-colored complete metal crowns, resin-faced metal crowns, porcelain-fused-to-metal crowns, and ceramic crowns. Their study used approximately 1900 images, which the authors considered insufficient for recognizing all types of prostheses used in dental clinical practice. In their results, metal-colored dental prostheses were recognized with an AP above 0.80, whereas tooth-colored prostheses were recognized with an AP of approximately 0.60 from oral photographic images. Therefore, the combined use of intraoral images and PRs may be required to decrease misdetection.

Detection of prosthetic restorations with automated computer processing provides objective information to patients; in this manner, patients will be motivated to receive dental treatment, confident in their dentists' diagnoses, and less worried about dental operations [24]. Furthermore, AI technology may improve patient management and patient-clinician relationships [21]. AI-based systems may be useful tools for relieving the workload of dentists and auxiliary staff. They can also save clinicians' time and assist them in deciding on appropriate treatment plans. AI can also be used in student education, improving students' ability to read and interpret dental radiographic images.

Considering these encouraging results, we propose that crown and bridge restorations can be detected automatically. With additional software, the marginal fit of these prostheses could be examined, and the incidence of crown and bridge prostheses in patients could be compared using this AI program. Future studies should compare teeth with crown and bridge prostheses and teeth without restorations in terms of caries, periapical lesions, or the need for root canal treatment.

In conclusion, AI technologies are useful in prosthodontics in several ways. Prosthetic restorations were detected with high accuracy using a deep learning method. If panoramic images are combined with oral photographs obtained with intraoral scanners, better results can be obtained in detecting prosthetic restorations in the clinic.

Data Availability

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare that they have no conflicts of interest.