Part of the Special Issue "Teledentistry: Current Applications, Trends, Future Scope, and Problems"
A Convolutional Neural Network for Automatic Tooth Numbering in Panoramic Images
Analysis of dental radiographs and images is an important and common part of the diagnostic process in daily clinical practice. During this process, the dentist must, among other tasks, number the teeth. This study is aimed at proposing a convolutional neural network (CNN) that performs this task automatically for panoramic radiographs. A total of 8,000 panoramic images were categorized by two experts with more than three years of experience in general dentistry. The neural network consists of two main stages, object detection and a classification stage that supports it, combined with transfer learning to improve computing time and precision. A Matterport Mask RCNN was employed for object detection, and a ResNet101 in the classification stage. The neural model achieved a total loss of 6.17% (accuracy of 93.83%). The architecture of the model achieved an accuracy of 99.24% in tooth detection and 93.83% in numbering teeth across different oral health conditions.
Modern dentistry employs computer-assisted procedures in common dental treatments such as surgical planning, postoperative assessment, mechanized dental implants, and orthodontic planning.
The numbering of teeth in dental radiology is a routine, time-consuming evaluation. Nowadays, dental images are combined with artificial intelligence in many applications, such as dental diagnosis and treatment [2, 3]. Tooth numbering is required, for example, for human identification from dental images, in routine dental procedures, in maxillofacial surgical applications, and in generic tooth modelling.
A large number of studies have employed deep learning to reduce the workload of professionals and to recognize certain features. Neural networks used for image recognition have evolved over time: they initially used Regions with Convolutional Neural Networks (R-CNN) for classification tasks and later Fast R-CNN for both classification and detection [6, 7]. Presently, deep learning methods based on convolutional neural networks are widely used in the field of medical image analysis. The present study applies these methods to detect and number teeth in panoramic images.
The objective of this study was to modify the neural network used in a previous study by the authors, which obtained a precision of 99.24% in detecting the presence or absence of a tooth, so that it can number the teeth in a panoramic image according to the Fédération Dentaire Internationale (FDI) tooth numbering system.
2. Materials and Methods
2.1. Study Design
This study used a dataset of anonymized and categorized panoramic dental images. A CNN was first constructed to detect the presence or absence of teeth on the radiograph and was later modified to number teeth according to the FDI classification. Reporting of this study follows the STARD guideline.
2.2. Image Dataset
Panoramic images were taken from Asisa Dental S.A.U. centers in the Community of Madrid (Spain). The images were completely anonymized by CareStream Health Spain SA (Pozuelo de Alarcón, Madrid, Spain); no additional information such as name, gender, age, or acquisition date appears in the database. Data collection was ethically approved by the Ethics Committee of Research with Regional Medicines of the Community of Madrid (CEIm-R) on June 15, 2018. The requirement to obtain informed consent from patients was waived by the ethics committee.
The inclusion criterion for the image database employed in the present study was images from adults older than 18 years. The exclusion criteria were images of edentulous patients, images with temporary teeth or poor definition, images with removable prostheses or with only implants present, computerized axial tomography (CAT) scans, and radiographs with overlap or objects out of the imaging plane.
Each image was revised by two examiners with more than three years of experience in general dentistry. The examiners evaluated the image database through a visualization program created to collect the information.
For this study, the 5,121 8-bit images employed in a previously published manuscript by the authors were used to start the image database. A set of 2,230 correctly demarcated samples was obtained. Of these, the samples with 28 or more teeth were selected, yielding 1,617 samples, from which those containing metallic parts were filtered out, leaving 1,217 samples suitable for training and validation of the final FDI detection and assignment model (Figure 1).
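The selection funnel described above can be sketched as a chain of filters. This is a minimal illustration with toy data; the field names (`demarcated`, `tooth_count`, `has_metal`) are hypothetical and not taken from the original pipeline.

```python
def select_training_samples(samples):
    """Keep correctly demarcated samples with >= 28 teeth and no
    metallic parts, mirroring the selection funnel described above.
    Field names are hypothetical, not from the original pipeline."""
    demarcated = [s for s in samples if s["demarcated"]]
    enough_teeth = [s for s in demarcated if s["tooth_count"] >= 28]
    return [s for s in enough_teeth if not s["has_metal"]]

# Toy data: only the first sample survives all three filters
samples = [
    {"demarcated": True, "tooth_count": 30, "has_metal": False},
    {"demarcated": True, "tooth_count": 20, "has_metal": False},
    {"demarcated": False, "tooth_count": 32, "has_metal": False},
    {"demarcated": True, "tooth_count": 32, "has_metal": True},
]
selected = select_training_samples(samples)
```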
The number of existing teeth in the 2,230 images, distributed by their FDI number, is detailed in Table 1. As can be seen, for all quadrants, the number of teeth in positions 1 to 7 is quite homogeneous. The third molar (position 8), however, was not always categorized by the experts, and there are also fewer cases of it.
2.3. CNN Architecture
The categorized panoramic radiographs are used as input to the neural network architecture presented here. The system outputs a bounding box and a tooth number for every detected tooth in the image.
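As an illustration, one plausible shape for such an output is sketched below. The container and its field names are hypothetical, not taken from the original implementation:

```python
from dataclasses import dataclass

@dataclass
class ToothDetection:
    # Hypothetical container for one detected tooth; field names are
    # illustrative, not from the original implementation.
    box: tuple    # bounding box (x1, y1, x2, y2) in pixels
    score: float  # detection confidence in [0, 1]
    fdi: int      # assigned FDI tooth number (11-48)

# Example output for the two upper central incisors
detections = [
    ToothDetection(box=(412, 210, 468, 320), score=0.97, fdi=11),
    ToothDetection(box=(470, 208, 524, 318), score=0.95, fdi=21),
]
```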
The algorithms ran on a TensorFlow backend (versions 1.14 and 2.2) under Windows 10 and Ubuntu 18.04. In the final step, the system was tested in the cloud (AWS) on a p3.8xlarge instance (4 Tesla V100 GPUs, 64 GB of GPU memory, 244 GB of instance memory, 32 vCores), with the Deep Learning AMI using the conda virtual environment tensorflow_p36.
The neural network consists of two main stages, object detection and a classification stage that supports it, combined with transfer learning.
The same Matterport Mask RCNN employed in our previous study was used for object detection (Figure 2).
A ResNet101 was employed in the classification stage (Figure 3). The classification stage was the same as in the previous study, although, to improve automatic tooth numbering, a new classification level (COCO) was included.
To take advantage of the localization precision obtained in our previous tooth detection study, transfer learning was used for this model, reusing all the hyperparameters obtained there. This contributed not only to a shorter training time but also to greater precision.
3. Results
3.1. Training Process
The goal of this study was to assess the feasibility of correctly recognizing 32 different FDI numbers, which means 33 classes (32 + background). However, exploiting the symmetry of the teeth across the quadrants, it was decided to work with 8 classes + background, and a postprocess was added afterwards.
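The quadrant symmetry and postprocess can be sketched as follows: the network predicts only the tooth position within a quadrant (classes 1-8), and the quadrant is recovered from the bounding-box location to form the two-digit FDI number. This is a minimal sketch assuming the patient's right side appears on the left of the panoramic image (the usual radiographic convention); the actual postprocess used in the study may differ.

```python
def fdi_number(cx, cy, position, img_w, img_h):
    """Map a detected tooth to its FDI number.

    cx, cy   : centre of the predicted bounding box (pixels)
    position : predicted class, 1..8 (central incisor .. third molar)

    Assumes the patient's right appears on the left of the image, so:
      upper-left image half  -> quadrant 1 (patient's upper right)
      upper-right image half -> quadrant 2
      lower-right image half -> quadrant 3
      lower-left image half  -> quadrant 4
    """
    upper = cy < img_h / 2
    left = cx < img_w / 2
    if upper and left:
        quadrant = 1
    elif upper:
        quadrant = 2
    elif not left:
        quadrant = 3
    else:
        quadrant = 4
    # FDI notation: first digit is the quadrant, second the position
    return quadrant * 10 + position
```

For example, a first molar (position 6) detected in the upper-left half of a 2000x1000 image would be numbered 16.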
To train this neural network, 53 training runs were carried out, each with a minimum of 60 and a maximum of 300 epochs. The duration of each run was between 3 and 7 hours, depending on the number of epochs and the learning rates used.
For each training/validation group, the learning rate and the number of epochs were varied. The number of epochs in each group varied between 4 and 20, and the learning rate between 0.012 and 0.0014286. Depending on the chosen combination, and especially on the strategy applied when selecting the validation group, it was possible to determine how many epochs to use.
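The grouped training strategy above can be sketched as a stepped learning-rate schedule. The group sizes and rates below are illustrative values within the reported ranges (4-20 epochs per group, learning rates between 0.0014286 and 0.012), not the exact settings used in the study.

```python
def stepped_lr_schedule(groups):
    """Expand (epochs, learning_rate) training groups into a
    per-epoch learning-rate list, mirroring the grouped training
    strategy described above. Group values are illustrative."""
    schedule = []
    for epochs, lr in groups:
        schedule.extend([lr] * epochs)
    return schedule

# Three hypothetical groups within the reported ranges
schedule = stepped_lr_schedule([(10, 0.012), (15, 0.006), (20, 0.0014286)])
```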
3.2. Tooth Numbering Results
The neural model achieved a total loss of 6.17% (accuracy of 93.83%). This result was obtained with the parameters detailed in Table 2.
The evolution of the selected metrics for both the training set and the validation set is shown in Figure 4. As the validation curves do not deviate from the training curves in Figure 4, there is no overfitting. The blue line represents the behavior of the training data, and the orange line that of the validation data.
3.3. Some Tooth Number Examples without Anomalies
Figure 5 shows the tooth numbering results for two images without anomalies. Figure 5(a) shows a panoramic image with all teeth present and no anomalies, in which every tooth is numbered correctly. Figure 5(b) shows a panoramic image without anomalies but with two teeth absent, again with every tooth numbered correctly.
3.4. Some Tooth Number Examples with Anomalies
Figure 6 details some examples of results provided by the neural network in the presence of anomalies. Figure 6(a) is an image with 28 teeth in which missing teeth are detected. The missing teeth are numbers 36 and 46; the absence of 36 is detected, but the absence of 46 is not, and the tooth in that position is numbered as 47. Figure 6(b) shows an example where the absence of 47 is reported even though the tooth is actually present. Figure 6(c) is an example in which the wisdom teeth in the 1st and 4th quadrants are not identified. In Figure 6(d), tooth number 28 is not detected, and the pontic is treated as a single tooth.
4. Discussion
This study aimed to build a convolutional neural network to number teeth in panoramic radiographs. A Matterport Mask RCNN, a ResNet101, and transfer learning from this model were employed to achieve the best possible accuracy. The architecture of the model achieved an accuracy of 99.24% in tooth detection and 93.83% in numbering teeth.
The neural network employed in this study was first constructed to automatically detect the presence or absence of a tooth, with an accuracy of 99.24%, as reported in a previous manuscript by the authors. It was then modified to add a new task: tooth numbering following the FDI classification.
Convolutional networks have been extensively applied, with very good results, to image recognition tasks in several fields such as medical image analysis, including tooth detection and numbering in dental radiographs.
Several published studies have analyzed dental images with image-processing algorithms to reach high accuracy in tooth classification. The algorithms employed to classify teeth include Fourier descriptors, textures, and Bayesian techniques, among others.
Hosntalab et al. employed a multistage technique to classify teeth in multislice CT (MSCT) images. Their algorithm had three stages: segmentation, feature extraction, and tooth classification performed by a conventional supervised classifier. A difference from the architecture proposed here is that ours has two stages plus transfer learning. The main advantage of our architecture is that the classification result does not rely on the accuracy of hand-crafted feature extraction algorithms.
Bitewing images are commonly used for tooth numbering with artificial intelligence [5, 14]. Chen et al. employed a faster R-CNN to number teeth in periapical images. The image database in that case comprised 1,250 images, and teeth were numbered following the FDI system. The precision of the neural network in detecting teeth was 98.8%, but the precision in numbering them drops to 71.5%. As in our study, the precision in tooth detection is higher than in numbering. However, our proposed network achieves greater precision in both tasks than the one proposed by Chen et al.
Yasa et al. analyzed 1,125 bitewing images with a faster R-CNN with the goal of identifying and numbering teeth. The proposed neural network achieved a precision of 0.9293 in tooth numbering.
Tuzoff et al. employed 1,574 anonymized panoramic radiographs to detect and number teeth according to the FDI notation with a faster R-CNN algorithm. The precision in this case was 99.41% in tooth detection, and the specificity was 0.9994 in tooth numbering.
Yuniarti et al.  used 16 images (6 bitewing and 10 panoramic) to detect and number teeth with a method that achieved an accuracy of 91.6% in detection and 81.5% in numbering.
Sathya and Neelaveni identified and numbered teeth in radiographic images with a transfer learning approach based on AlexNet. Their study achieved accuracies, in the maxilla and mandible, respectively, of 94.16% and 94.06% on molars, 93.75% and 94.25% on premolars, 86.5% and 87% on canines, and 91.5% and 89.5% on incisors.
Estai et al. classified permanent teeth on 591 orthopantomogram images employing CNNs and achieved a precision of 0.99.
Bilgir et al.  developed a Faster R-CNN to automatically number teeth on a database of 2,482 panoramic radiographs. This study achieved a precision of 0.9652.
Orhan et al.  employed cone-beam computed tomography (CBCT) images to detect periapical pathosis.
The main strengths of this study are the number of images analyzed, a total of 5,121 X-rays, which were categorized by two experts with more than three years of experience in general practice. In this sense, it is important to take into account the concordance between examiners, detailed in the previously published manuscript. In addition, our neural network was trained with natural roots, dental implants, filled teeth, and endodontic treatments, among others, so most clinical situations are covered.
The image database contains 8,000 panoramic images with a great variety of health conditions. However, some anomalies were observed. For example, some of the images showed the absence of several teeth, and the network correctly identified that those teeth were missing and obtained the correct numbering; in other cases, the network detected the absence of a tooth, but the proposed numbering was wrong. On the other hand, the network correctly numbers teeth that contain metal parts or have undergone any other treatment, such as fillings, but in the case of a prosthetic crown, it detects a single tooth. This is due to how the examiners annotated these types of situations.
5. Conclusions
Based on the final accuracy achieved both in detecting and numbering teeth, it is possible to conclude that the proposed convolutional neural network can be used in real clinical practice. The architecture of the model achieved an accuracy of 99.24% in tooth detection and 93.83% in numbering teeth.
Data Availability
The image data used to support the findings of this study have not been made available because of patient privacy.
Conflicts of Interest
The authors declare no conflict of interest.
Authors' Contributions
All authors have read and agreed to the published version of the manuscript. M.P-P. and J.G.V. are responsible for the conceptualization and methodology; M.P-P., J.G.V., and A.B.T. for data curation; A.B.T. for software; M.P-P. for writing—original draft preparation; C.H.M-M. for writing—review and editing; M.P-P., J.G.V., C.H.M-M., and C.I. for visualization; C.I. for supervision; and C.H.M-M. and C.I. for funding acquisition.
Acknowledgments
This research was funded by Asisa Dental S.A.U.
References
F. Pongrácz and Z. Bárdosi, “Dentition planning with image-based occlusion analysis,” International Journal of Computer Assisted Radiology and Surgery, vol. 1, no. 3, pp. 149–156, 2006.
J. Oliveira and H. Proença, “Caries detection in panoramic dental X-ray images,” in Computational Vision and Medical Image Processing, pp. 175–190, Springer, Dordrecht, 2011.
P. L. Lin, Y. H. Lai, and P. W. Huang, “An effective classification and numbering system for dental bitewing radiographs using teeth region and contour information,” Pattern Recognition, vol. 43, no. 4, pp. 1380–1392, 2010.
M. Hosntalab, R. Aghaeizadeh Zoroofi, A. Abbaspour Tehrani-Fard, and G. Shirani, “Classification and numbering of teeth in multi-slice CT images using wavelet-Fourier descriptor,” International Journal of Computer Assisted Radiology and Surgery, vol. 5, no. 3, pp. 237–249, 2010.
H. Chen, K. Zhang, P. Lyu et al., “A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films,” Scientific Reports, vol. 9, no. 1, p. 3840, 2019.
A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” Advances in Neural Information Processing Systems, vol. 25, pp. 1097–1105, 2012.
R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587, Columbus, OH, USA, 2014.
Z. Li, X. Zhang, H. Müller, and S. Zhang, “Large-scale retrieval for medical image analytics: a comprehensive review,” Medical Image Analysis, vol. 43, pp. 66–84, 2018.
M. Prados-Privado, J. García Villalón, A. Blázquez Torres, C. H. Martínez-Martínez, and C. Ivorra, “A validation employing convolutional neural network for the radiographic detection of absence or presence of teeth,” Journal of Clinical Medicine, vol. 10, no. 6, p. 1186, 2021.
P. M. Bossuyt, J. B. Reitsma, D. E. Bruns et al., “STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies,” BMJ, vol. 2015, article h5527, 2015.
G. Litjens, T. Kooi, B. E. Bejnordi et al., “A survey on deep learning in medical image analysis,” Medical Image Analysis, vol. 42, pp. 60–88, 2017.
M. H. Mahoor and M. Abdel-Mottaleb, “Classification and numbering of teeth in dental bitewing images,” Pattern Recognition, vol. 38, no. 4, pp. 577–586, 2005.
A. Ehsani Rad, M. S. Mohd Rahim, and A. Norouzi, “Digital dental X-ray image segmentation and feature extraction,” TELKOMNIKA Indonesian Journal of Electrical Engineering, vol. 11, no. 6, 2013.
Y. Yasa, Ö. Çelik, I. S. Bayrakdar et al., “An artificial intelligence proposal to automatic teeth detection and numbering in dental bite-wing radiographs,” Acta Odontologica Scandinavica, vol. 79, no. 4, pp. 275–281, 2020.
D. V. Tuzoff, L. N. Tuzova, M. M. Bornstein et al., “Tooth detection and numbering in panoramic radiographs using convolutional neural networks,” Dentomaxillofacial Radiology, vol. 48, no. 4, article 20180051, 2019.
A. Yuniarti, A. S. Nugroho, B. Amaliah, and A. Z. Arifin, “Classification and numbering of dental radiographs for an automated human identification system,” TELKOMNIKA, vol. 10, no. 1, p. 137, 2012.
B. Sathya and R. Neelaveni, “Transfer learning based automatic human identification using dental traits - an aid to forensic odontology,” Journal of Forensic and Legal Medicine, vol. 76, article 102066, 2020.
M. Estai, M. Tennant, D. Gebauer et al., “Deep learning for automated detection and numbering of permanent teeth on panoramic images,” Dentomaxillofacial Radiology, vol. 50, 2021.
E. Bilgir, İ. Ş. Bayrakdar, Ö. Çelik et al., “An artificial intelligence approach to automatic tooth detection and numbering in panoramic radiographs,” BMC Medical Imaging, vol. 21, p. 124, 2021.
K. Orhan, I. S. Bayrakdar, M. Ezhov, A. Kravtsov, and T. Özyürek, “Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans,” International Endodontic Journal, vol. 53, no. 5, pp. 680–689, 2020.