Abstract

The quality of pharmaceutical products plays an important role in the pharmaceutical industry as well as in our lives. The use of defective tablets can be harmful for patients. In this research we propose a nondestructive method to distinguish defective from nondefective tablets using their surface morphology. Three environmental factors, temperature, humidity, and moisture, are analyzed to evaluate the performance of the proposed method. Multiple textural features are extracted from the surfaces of the defective and nondefective tablets. These textural features are based on the gray level cooccurrence matrix, run length matrix, histogram, autoregressive model, and HAAR wavelet. In total, 281 textural features are extracted from the images. We performed an analysis on all 281 features, the top 15 features, and the top 2 features. The top 15 features are selected using three different feature reduction techniques: chi-square, gain ratio, and relief-F. Three different classifiers, support vector machine, k-nearest neighbors, and naïve Bayes, are used to calculate the accuracies of the proposed method in two experiments: leave-one-out cross-validation and a train-test split. We tested each classifier against all selected feature sets and then compared their results. The experimental work shows that in most cases SVM performed better than the other two classifiers.

1. Introduction

Pharmaceutical drugs are chemical compounds that can be used to prevent and cure different kinds of diseases. In today's fast-moving era, advancements in the field of pharmacology help doctors save lives by curing patients. Tablets are the most common form of medicine prescribed by physicians. The U.S. FDA (Food and Drug Administration) is responsible for approving medicines before their manufacturers send them to the local market. The FDA allows only those medicines to be sold that are safe and fulfil all of its quality metrics.

Even after FDA approval, there is still a considerable chance that the medicines supplied to local pharmacies are substandard. Substandard medicines are those that do not fulfill the quality standards and are harmful to patients' health. They can be categorized as counterfeit, expired, and environment-affected medicines.

Environment-affected medicines are those which conform to the standards at the time of manufacturing but which, with the passage of time, are changed by external factors into substandard medicines. These factors include moisture, light (especially sunlight), extreme temperature, and oxygen. As discussed by Islam et al. [1], moisture affects the physical and chemical stability of drugs by accelerating hydrolysis and reacting with the excipients. In another study, Szakonyi and Zelkó [2] state that water absorption at the surface of a tablet degrades its active pharmaceutical ingredients (APIs). The use of defective tablets may cause minor issues in a patient's body, such as allergies, or may even result in death. There is therefore an immense need for a method that can identify environment-affected medicines after their manufacturing.

In this research we deal with three environmental factors: humidity, moisture, and temperature. Humidity is the amount of water vapor present in the air. The APIs of pharmaceutical tablets react with humidity if the tablets are left in open air, which results in oxidation and reduction processes. The second factor considered in this research is moisture. The term moisture refers to water content in the liquid state. The stability of tablets strongly depends on the amount of water present in them. An increase in moisture beyond the required amount can cause reactions between APIs and excipients, as discussed in [1]. Temperature is the third environmental factor dealt with in this study. Temperature changes the potency of tablets and results in unpredictable behavior.

Different techniques are available in the literature for the assessment and estimation of the formulation, quality, correctness, and stability of solid drugs. Some of these techniques are used at the time of manufacturing to verify the correct amount of APIs. TLC (thin layer chromatography) and HPLC (high-performance liquid chromatography) are traditional techniques used for this purpose. Deisingh [3] uses TLC for the estimation and identification of counterfeit medicines and their APIs. Both of these techniques are slow, expensive, and destructive [4].

As discussed in other studies [3, 5–8], solid drug assessment techniques can also be categorized as spectrum based assessment (SBA) techniques. These include mass spectrometry (MS), nuclear magnetic resonance (NMR) spectroscopy, X-ray diffraction (XRD), scanning electron microscopy (SEM), and vibrational spectroscopic (VS) techniques. VS includes Raman and near-infrared spectroscopy. Several studies [9–11] explain that all of these require either full or partial sample preparation, so they are either destructive or semidestructive, with the exception of the VS techniques.

Spectral image based assessment (SIBA) techniques are another type that can be used for the analysis of solid dosage forms. SIBA involves two major techniques known as multispectral imaging (MSI) and hyperspectral imaging (HSI). Hamilton and Lodder [12] used HSI for the analysis of pharmaceutical medicines, compared its performance with HPLC, and concluded that HSI is more accurate. In another study, Gowen et al. [13] performed nondestructive assessment of pharmaceutical tablets using VS along with various image processing (IP) techniques. An image created by combining digital imaging with either Raman or near-infrared spectroscopy is known as a chemical image. Chemical imaging is used by Šašić [14] for the analysis of pharmaceutical raw ingredients. Several studies [15–17] show that chemical imaging can also be used to monitor the development process and quality control of pharmaceutical tablets. Puchert et al. [18] use near-infrared chemical imaging (NIRCI) for the identification of counterfeit medicines. Extensive comparative studies of all these techniques are available in [5, 19].

Image based assessment (IBA) is also used for the analysis and classification of tablets. IBA is a nondestructive, less expensive, and simple approach based on different IP techniques such as image enhancement, segmentation, edge and contour detection, and texture analysis. Segmentation of grayscale tablet images using adaptive thresholding and morphological operations is used for tablet identification, also known as pill recognition. Andreas et al. [20, 21] performed classification using Euclidean distance on a feature set based on size, shape, and color, and their results show that the most dominant of these three features is size. Ramya et al. [22] used template matching along with a series of IP techniques to detect broken tablets in blister packaging. Špiclin et al. [23] performed inspection of imprinted tablets using image registration on an image database of defective and nondefective tablets. They used three registration methods: direct matching of pixel intensities, principal axis matching, and circular profile matching. Their comparative analysis shows that circular profile matching is the more powerful registration technique for visual inspection of tablets. In another study, geometrical and statistical methods for visual inspection of tablets were compared using receiver operating characteristic analysis. Geometrical features are based on the imprinted shape, while statistical features are based on tablet surface statistics. The inspection method proposed by Bukovec [24] can identify five types of defects: spot, deboss, emboss, crack, and dot. The results show that the features extracted by statistical methods are better than the geometrical methods for tablet inspection.

In this research we focus on IBA of the tablet surface morphology using textural features. The proposed methodology classifies solid tablets into two categories: defective tablets (DT) and nondefective tablets (NDT). The research aims at formulating a new nondestructive method based on the surface analysis of tablets for this classification. In the rest of the paper, Section 2 provides the materials and methods, Section 3 describes the results and discussion, and Section 4 concludes the paper.

2. Material and Methods

2.1. Image Acquisition

To perform the experimentation of the proposed methodology, nine different datasets were created. Each dataset comprises images of defective and nondefective versions of ten different tablets. These images are captured using a Labomed 5 MP digital camera mounted on a Nikon Eclipse LV100 microscope at a resolution of 2580 × 1944. We considered three major environmental factors, that is, temperature, moisture, and humidity, for the creation of defective tablets.

Three datasets are created for the tablets affected by temperature and labeled T1, T2, and T3. T1 consists of images of tablets placed at a temperature of 200°C for five minutes, together with their nondefective versions. Similarly, T2 and T3 contain images of defective and nondefective tablets placed at 240°C and 280°C for five minutes, respectively. In the same way, three datasets are created for the humidity factor and labeled A1, A2, and A3. The defective tablets in A1 were kept out of their packaging (in open air) for three days; similarly, A2 and A3 contain images of tablets that remained out of their packaging for two days and one day, respectively. Another three datasets are created for moisture-affected tablets: four tablets were imaged after one day, four after two days, and four after three days of exposure to different levels of moisture (liquid water), and these datasets are referred to as W1, W2, and W3, respectively. A brief description of the datasets is given in Table 1.

Figure 1 shows some of the images from the datasets used in this research. In each part of Figure 1 the first four images are of environment-affected tablets and the last four are of their nondefective versions. Figures 1(a), 1(b), and 1(c) show tablet images from datasets A1, A2, and A3, which are affected by humidity. Similarly, Figures 1(d), 1(e), and 1(f) display tablets affected by temperature, labeled T1, T2, and T3. Figures 1(g), 1(h), and 1(i) represent tablets from datasets W1, W2, and W3, respectively, all of which belong to moisture-affected tablets.

2.2. Proposed Methodology

In this research our main focus is on analysis based on the surface morphology of solid dosage forms (tablets) using IP and ML (machine learning). The surface of a tablet can effectively represent its characteristics. In the proposed methodology we use tablet surface images for the classification between DT and NDT. The proposed methodology mainly consists of four phases: preprocessing, feature extraction, feature reduction, and classification. The main flow of the proposed approach is shown in Figure 2.

In the first phase, input images are prepared for further analysis. The preprocessed images are then passed to the feature extraction phase, where different textural features are extracted and stored as a feature vector (FV). In the next phase, feature reduction techniques are applied to the FV to reduce its dimensionality. The last phase classifies the images into DT and NDT based on the selected features. Details of the proposed methodology are shown in Figure 3.

2.2.1. Preprocessing

Preprocessing consists of algorithms used for image enhancement and noise removal. After image acquisition, preprocessing is an essential step to prepare the captured images for feature extraction. Preprocessing is performed in two steps: grayscale conversion and image enhancement.

(1) Grayscale Conversion. Texture analysis is used in different machine vision problems such as surface inspection and classification. Texture can be defined as the spatial distribution of different gray levels in a neighborhood. To perform textural analysis it is important to convert the color image into a grayscale image.

(2) Contrast Enhancement. Image enhancement is important to improve the quality of the input image. The enhancement technique used in the proposed methodology is contrast enhancement. The increase in image contrast is performed using the formula given in [25], which is based on saturating 1% of the data at the high and low gray intensity values of the input image.

The contrast enhancement formula is as follows:

$$CE(i,j) = \frac{I(i,j) - I_{\mathrm{low}}}{I_{\mathrm{high}} - I_{\mathrm{low}}}, \tag{1}$$

where $CE(i,j)$ is the contrast-enhanced value at pixel $(i,j)$, $I(i,j)$ is the image intensity at index $(i,j)$, $I_{\mathrm{high}}$ is the high intensity of the image, and $I_{\mathrm{low}}$ is the low intensity of the image.
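As a minimal illustration of the two preprocessing steps, the Python sketch below converts an image to grayscale and applies a contrast stretch that saturates 1% of the pixels at the low and high ends, in the spirit of (1). The paper's own implementation is in MATLAB, so scikit-image and NumPy are used here only as stand-ins and the function names are illustrative.

```python
import numpy as np
from skimage import io, color

def preprocess(path):
    """Grayscale conversion followed by a contrast stretch that
    saturates 1% of the data at the low and high ends (cf. Eq. (1))."""
    rgb = io.imread(path)
    gray = color.rgb2gray(rgb)                    # intensities in [0, 1]

    # 1% saturation: clip at the 1st and 99th percentiles
    i_low, i_high = np.percentile(gray, (1, 99))
    clipped = np.clip(gray, i_low, i_high)

    # linear stretch to [0, 1], as in Eq. (1)
    enhanced = (clipped - i_low) / (i_high - i_low)
    return enhanced
```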

2.2.2. Feature Extraction

After applying preprocessing to the input image, feature extraction is performed to quantify the image surface through different parameters. Analysis of the tablet surface through its texture can greatly help in classifying tablets as correct or damaged. The texture of a surface can be described by different types of features extracted from the gray level distribution of the image intensity. Statistical feature extraction methods are extensively used for texture analysis. The textural features used in this study are gray level cooccurrence matrix (GLCM), histogram, run length matrix (RLM), autoregressive model (ARM), and HAAR wavelet features. In total, 281 textural features are extracted from each preprocessed image using MaZda (texture analysis software) designed by Szczypiński et al. [26]. The creation of the $k$th dataset is shown in (2), where $k$ ranges from 1 to 9.

The formula for dataset representation is as follows:

$$D_k = \{\, (I_i, F_i) \mid i = 1, 2, \ldots, N \,\}, \tag{2}$$

where $I_i$ is the $i$th input image, $F_i$ is the feature set of the $i$th image, and $N$ is the total number of images in the dataset.

Detail of these features is given below.

(1) Gray Level Cooccurrence Matrix (GLCM). GLCM is a statistical feature extraction method that can be used to describe the texture of a surface. It is based on the spatial relationship between pixels. Texture characterization is performed by calculating how often pairs of pixels with specific values in a specified spatial relationship occur in an image. MaZda provides eleven features extracted from the GLCM: angular second moment, contrast, correlation, sum of squares, inverse difference moment, sum average, sum variance, sum entropy, entropy, difference variance, and difference entropy. In this research the GLCM features are computed for five between-pixel distances (1, 2, 3, 4, and 5) in four directions, so a total of 220 features are extracted.
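The sketch below shows how GLCM features of this kind can be computed with scikit-image for several distances and four directions. It is only an illustration: scikit-image exposes a subset of the eleven MaZda features directly (e.g., ASM, contrast, correlation), so it does not reproduce the full 220-feature set produced by MaZda.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(gray01, distances=(1, 2, 3, 4, 5)):
    """GLCM-based texture features for several between-pixel distances
    and four directions (0, 45, 90, 135 degrees)."""
    img = (gray01 * 255).astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(img, distances=list(distances), angles=angles,
                        levels=256, symmetric=True, normed=True)
    feats = {}
    for prop in ("ASM", "contrast", "correlation"):   # subset of MaZda's 11 features
        vals = graycoprops(glcm, prop)                # shape: (n_distances, n_angles)
        for d_idx, d in enumerate(distances):
            for a_idx, a in enumerate((0, 45, 90, 135)):
                feats[f"{prop}_d{d}_a{a}"] = float(vals[d_idx, a_idx])
    return feats
```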

(2) Histogram Features. Histogram features are first-order statistics used to represent surface texture. According to Srinivasan and Shobha [27], histogram based features represent the intensity concentration over all parts of the image. MaZda provides a total of nine histogram features, from which we have chosen four: mean, variance, skewness, and kurtosis of the histogram.
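Because these four are plain first-order statistics, they are straightforward to compute directly from the gray levels, as in the hedged sketch below (NumPy and SciPy stand in for MaZda's implementation).

```python
import numpy as np
from scipy.stats import skew, kurtosis

def histogram_features(gray01):
    """First-order (histogram) statistics: mean, variance, skewness, kurtosis."""
    x = gray01.ravel()
    return {
        "hist_mean": float(np.mean(x)),
        "hist_variance": float(np.var(x)),
        "hist_skewness": float(skew(x)),
        "hist_kurtosis": float(kurtosis(x)),
    }
```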

(3) Run Length Matrix (RLM). A run length in a gray level image is defined as a set of consecutive, collinear pixels having the same gray level. The coarseness of a texture in a specific direction can be captured using the RLM [28]. MaZda provides a total of 20 RLM features: run length nonuniformity, gray level nonuniformity, long run emphasis, short run emphasis, and fraction of image in runs, each computed in four different directions (horizontal, vertical, 45 degrees, and 135 degrees).
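The following simplified sketch illustrates run-length features for the horizontal direction only, after quantizing the image to a small number of gray levels. It is an assumption-laden stand-in for MaZda (which also computes the other three directions and uses its own quantization), so the numeric values will not match MaZda's output.

```python
import numpy as np

def rlm_features_horizontal(gray01, levels=16):
    """Run-length features along image rows (horizontal direction only).
    A run is a maximal sequence of consecutive pixels with the same
    quantized gray level."""
    q = np.minimum((gray01 * levels).astype(int), levels - 1)
    rows, cols = q.shape
    # P[g, r-1] = number of runs of gray level g with length r
    P = np.zeros((levels, cols), dtype=float)
    for row in q:
        start = 0
        for j in range(1, cols + 1):
            if j == cols or row[j] != row[start]:
                P[row[start], (j - start) - 1] += 1
                start = j
    r_idx = np.arange(1, cols + 1)[None, :]        # possible run lengths
    n_runs = P.sum()
    n_pixels = rows * cols
    return {
        "ShrtREmph": float((P / r_idx ** 2).sum() / n_runs),       # short run emphasis
        "LngREmph":  float((P * r_idx ** 2).sum() / n_runs),       # long run emphasis
        "GLevNonU":  float((P.sum(axis=1) ** 2).sum() / n_runs),   # gray level nonuniformity
        "RLNonUni":  float((P.sum(axis=0) ** 2).sum() / n_runs),   # run length nonuniformity
        "Fraction":  float(n_runs / n_pixels),                     # run percentage (fraction of image in runs)
    }
```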

(4) Autoregressive Model Features (ARM). MaZda provides five different features based on the autoregressive model. These are theta 1 (parameter $\theta_1$), theta 2 (parameter $\theta_2$), theta 3 (parameter $\theta_3$), theta 4 (parameter $\theta_4$), and sigma (parameter $\sigma$).
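As a rough illustration, the sketch below fits a four-parameter causal 2D autoregressive model by least squares and takes the residual standard deviation as $\sigma$. The particular causal neighborhood (left, upper-left, upper, upper-right) is an assumption; MaZda's exact neighborhood and normalization may differ.

```python
import numpy as np

def ar_model_features(gray01):
    """Fit I(i,j) ~ th1*I(i,j-1) + th2*I(i-1,j-1) + th3*I(i-1,j) + th4*I(i-1,j+1)
    by least squares; sigma is the standard deviation of the residual."""
    I = gray01.astype(float)
    y = I[1:, 1:-1].ravel()                 # pixels whose four neighbors all exist
    X = np.column_stack([
        I[1:, :-2].ravel(),    # left neighbor        I(i, j-1)
        I[:-1, :-2].ravel(),   # upper-left neighbor  I(i-1, j-1)
        I[:-1, 1:-1].ravel(),  # upper neighbor       I(i-1, j)
        I[:-1, 2:].ravel(),    # upper-right neighbor I(i-1, j+1)
    ])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = float(np.std(y - X @ theta))
    return {"Teta1": theta[0], "Teta2": theta[1],
            "Teta3": theta[2], "Teta4": theta[3], "Sigma": sigma}
```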

(5) HAAR Wavelet Features. The wavelet energy feature is measured at 8 scales in four frequency bands (LL, LH, HL, and HH) using MaZda, which provides a total of 32 features.
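A hedged sketch of this idea using PyWavelets is given below: a single-level Haar decomposition is applied repeatedly so that the energies of the four sub-bands are collected at each scale. The LL/LH/HL/HH naming follows the common convention and the image must be large enough to support the requested number of scales.

```python
import numpy as np
import pywt

def haar_wavelet_energies(gray01, n_scales=8):
    """Energy of the LL, LH, HL, and HH sub-bands at each scale of a
    Haar wavelet decomposition (4 bands x n_scales features)."""
    feats = {}
    approx = gray01.astype(float)
    for scale in range(1, n_scales + 1):
        cA, (cH, cV, cD) = pywt.dwt2(approx, "haar")
        bands = {"LL": cA, "LH": cH, "HL": cV, "HH": cD}
        for name, band in bands.items():
            feats[f"WavEn_{name}_s{scale}"] = float(np.mean(band ** 2))
        approx = cA                      # continue the decomposition from the LL band
    return feats
```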

2.2.3. Feature Reduction

The feature extraction phase results in 281 different features, which are difficult to handle directly. For better results it is therefore important to reduce the dimensionality of the feature set. Three different feature reduction techniques are used in this research to extract the most promising features, which can lead toward correct classification between DT and NDT. These three techniques are chi-square (CS), gain ratio (GR), and relief-F (RF). Feature reduction is performed by extracting the top 15 features out of the complete feature vector for each of these three techniques. Feature reduction was performed using WEKA, developed by Hall et al. [29]. All of these feature selection algorithms are used together with the Ranker search algorithm. It is observed that the top 15 features extracted by GR and RF are the same for our dataset. The top 15 features extracted by CS, GR, and RF are listed in Table 2. The details of these feature selection techniques are discussed below.

(1) Chi-Square. The chi-square (CS) feature selection algorithm ranks features by calculating the chi-squared statistic with respect to the class. CS measures the degree of dependency between an attribute and a specific class. The formula for CS, as given by Chatcharaporn et al. [30], is shown below.

The formula for chi-square is as follows:

$$\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i}, \tag{3}$$

where $O_i$ and $E_i$ are the observed and expected frequencies, respectively.
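The study used WEKA's chi-squared evaluator with the Ranker search; the sketch below shows the same idea with scikit-learn's chi2 scorer as a stand-in (it requires nonnegative inputs, hence the min-max scaling), selecting the top 15 of the 281 features.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

def top_k_chi2(X, y, k=15):
    """Rank features with the chi-squared statistic and keep the top k.
    X: (n_samples, 281) feature matrix, y: 0/1 labels (NDT/DT)."""
    X_scaled = MinMaxScaler().fit_transform(X)       # chi2 needs nonnegative values
    selector = SelectKBest(score_func=chi2, k=k).fit(X_scaled, y)
    top_idx = np.argsort(selector.scores_)[::-1][:k]  # indices of the k best features
    return top_idx, selector.transform(X_scaled)
```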

(2) Gain Ratio. Gain ratio (GR) ranks the attributes by compensating for the bias of information gain (IG). According to Chatcharaporn et al. [30], GR can be measured as follows.

The formula for gain ratio is as follows:

$$GR(X, Y) = \frac{IG(X, Y)}{H(X)} = \frac{H(Y) - H(Y \mid X)}{H(X)}, \tag{4}$$

where $H(X)$ is the entropy of $X$. The value of GR always lies in $[0, 1]$. $GR = 1$ means that $X$ can completely predict $Y$, where $Y$ is the variable to be predicted, and $GR = 0$ indicates no relation between $X$ and $Y$.
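scikit-learn has no built-in gain ratio, so the following is a small manual sketch of (4) for one continuous feature: the feature is discretized into bins and IG(X, Y)/H(X) is computed. WEKA's GainRatioAttributeEval handles discretization differently, so rankings may not match exactly.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a discrete label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def gain_ratio(feature, y, bins=10):
    """Gain ratio of one (discretized) feature with respect to class labels y."""
    edges = np.histogram_bin_edges(feature, bins=bins)
    x = np.digitize(feature, edges[1:-1])            # bin index for each sample
    h_y, h_x = entropy(y), entropy(x)
    # conditional entropy H(Y | X)
    h_y_given_x = sum(
        (np.sum(x == v) / len(x)) * entropy(y[x == v]) for v in np.unique(x)
    )
    ig = h_y - h_y_given_x
    return ig / h_x if h_x > 0 else 0.0
```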

(3) Relief-F. Another statistical attribute selection technique used in this research is relief-F (RF). RF ranks each feature by calculating a weight based on the relationship between that feature and a specific class [30]. This weight calculation is based on two types of nearest-neighbor probabilities: the first is computed from the nearest neighbors of two different classes with different feature values, and the second from two nearest neighbors of the same class with the same feature value [31].
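As a rough illustration of the idea, the sketch below computes basic Relief weights for the two-class case using one nearest hit and one nearest miss per sample: a feature's weight grows when it separates classes and shrinks when it varies within a class. WEKA's ReliefFAttributeEval averages over several neighbors and handles multiclass problems, so this is only a simplified stand-in.

```python
import numpy as np

def relief_weights(X, y):
    """Basic Relief weights for a binary problem (higher weight = more relevant)."""
    X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)   # scale feature diffs to [0, 1]
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1)                  # L1 distance to every sample
        dist[i] = np.inf                                      # exclude the sample itself
        same, other = (y == y[i]), (y != y[i])
        hit = np.argmin(np.where(same, dist, np.inf))         # nearest same-class sample
        miss = np.argmin(np.where(other, dist, np.inf))       # nearest other-class sample
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n
    return w   # rank features by descending weight
```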

The top 15 features selected from the total of 281 using CS, GR, and RF are given in Table 2. It can be seen from Table 2 that, for CS, 14 of the top 15 features are angular second moment (AngScMom) features at multiple distances and one is the inverse difference moment (InvDfMom) at distance 4. All 15 features selected by CS are related to the GLCM. On the other hand, 14 of the features selected by GR are related to the GLCM and one is a wavelet energy from the HAAR wavelet features.

2.2.4. Classification

The evaluation of the features extracted from the tablet images is performed using three different classification algorithms, that is, SVM, KNN, and NB. In this research we compare the accuracies achieved by these classifiers. All experimental work is performed using MATLAB. Classification is performed using all 281 features, the top 15 features selected by the abovementioned feature reduction algorithms, and the top two of the 281 features.

(1) Naïve Bayes. Naïve Bayes is a statistical learning algorithm that performs probabilistic classification based on Bayesian networks [32]. Naïve Bayes performs training by estimating prior and conditional probabilities from the dataset. The prior probability for a specific class is calculated by dividing the number of training examples in that class by the total number of examples. The conditional probabilities are based on the frequency distribution of feature values in the training data belonging to that class [31]. NB is implemented in MATLAB for the experimentation. Some important studies related to drugs that use naïve Bayes as a classifier are [33–36].

(2) k-Nearest Neighbor (KNN). k-nearest neighbor (KNN) is a simple but robust algorithm that can efficiently deal with complex classification problems. It is based on parameters such as the number of nearest neighbors to consider during classification, denoted by $k$, and the distance between feature vectors in the dataset, which determines which group a sample belongs to. In the proposed methodology we have implemented KNN in MATLAB with $k = 2$ and cosine distance.

(3) Support Vector Machine (SVM). SVM partitions the dataset using a linear decision function built from the training data. SVM works in two steps: first, nonlinear data are mapped from the input space to a feature space; then the similarity of the feature vectors is measured using a kernel function. SVM can handle large feature sets with high accuracy [30]. SVM is implemented in MATLAB. Training on the datasets is performed using a linear kernel with the sequential minimal optimization (SMO) method for finding the separating hyperplanes. Hou et al. [37] have used SVM models for SH3 domain-peptide recognition.
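For readers who want a quick reference, the sketch below instantiates comparable classifiers with scikit-learn instead of the paper's MATLAB implementations: Gaussian naïve Bayes, KNN with k = 2 and cosine distance, and a linear-kernel SVM (scikit-learn's SVC uses an SMO-type solver internally). The scikit-learn stand-ins will not reproduce the MATLAB results exactly.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def build_classifiers():
    """The three classifiers compared in this study, with the parameters
    reported in the text (k = 2 with cosine distance, linear SVM kernel)."""
    return {
        "NB": GaussianNB(),
        "KNN": KNeighborsClassifier(n_neighbors=2, metric="cosine"),
        "SVM": SVC(kernel="linear"),
    }
```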

3. Results and Discussion

In this research we have evaluated the accuracy of the proposed methodology using two different experiments. In Experiment I, the leave-one-out (LOO) cross-validation method is used for the evaluation of the proposed approach. LOO cross-validation is applied first to each individual dataset, then to the combined datasets of each environmental factor, and lastly to a combined dataset of all environmental factors. In Experiment II the accuracy of the proposed method is evaluated using separate training and testing datasets: each dataset is divided into two halves, with 50% of the data used for training and the remaining 50% for testing. Classification accuracy is measured using three different classifiers (SVM, KNN, and NB). The feature vector is formed from a total of 281 texture based features extracted from the preprocessed images.
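The two evaluation protocols can be sketched as follows with scikit-learn, again only as an illustrative stand-in for the MATLAB experiments: leave-one-out cross-validation for Experiment I and a 50/50 train/test split for Experiment II, each reporting accuracy, sensitivity, and specificity. The stratified split and fixed random seed are assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, train_test_split, cross_val_predict
from sklearn.metrics import confusion_matrix

def acc_sn_sp(y_true, y_pred):
    """Accuracy, sensitivity, and specificity, with DT as the positive class (1)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    acc = (tp + tn) / (tp + tn + fp + fn)
    sn = tp / (tp + fn) if (tp + fn) else 0.0
    sp = tn / (tn + fp) if (tn + fp) else 0.0
    return acc, sn, sp

def experiment_1(clf, X, y):
    """Experiment I: leave-one-out cross-validation."""
    y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())
    return acc_sn_sp(y, y_pred)

def experiment_2(clf, X, y):
    """Experiment II: 50/50 train/test split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5,
                                              stratify=y, random_state=0)
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    return acc_sn_sp(y_te, y_pred)
```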

In Experiment I, first of all, we have used all 281 features as the feature vector and evaluated the performance of the proposed methodology using all three classifiers with LOO cross-validation. This classification is performed on each tablet dataset individually and then on the combined datasets. Table 3 contains the results of this experiment.

A graphical representation of the accuracy of each classifier is shown in Figure 4. The results show that the maximum accuracy is achieved by the SVM classifier for most of the datasets. Classification accuracies for moisture-affected tablets are higher than for the other two factors. The humidity-affected tablet datasets show that humidity affects the surface of solid tablets very slowly, which is why they have a lower classification rate. The same trend is reflected in the accuracies of the combined datasets.

In Tables 3 to 8, "Acc" stands for accuracy, "Sn" for sensitivity, and "Sp" for specificity.

Next, LOO cross-validation is applied to the selected top 15 features. Classification accuracies are calculated again using the three classifiers on the top 15 selected features, and the results show that the features extracted by CS provide higher accuracies than those from GR. The comparison of results using the top 15 features is shown in Table 4. Overall, SVM and KNN provide the higher accuracies using CS for the classification of all individual tablet datasets. SVM provides a maximum of 90.32% accuracy for the W1 dataset using CS, while KNN provides 90.91% accuracy for W3 using GR. Again the results highlight that moisture-affected tablets have a higher classification rate.

In the case of the combined datasets, the maximum accuracies are achieved for moisture-affected tablets and the lowest for humidity-affected tablets. For the overall combined dataset, a maximum accuracy of 86.30% was achieved using the KNN classifier. Figure 5 shows the accuracies of the individual and combined tablet datasets.

At the end of Experiment I, we have evaluated the accuracy of the proposed method using the top two features selected from the 281 features. These two features are selected by evaluating all pairwise combinations of the 281 features and choosing the pair that provides the maximum accuracy. The top two selected features are "S(5,0) Entropy" (entropy at distance 5) and "Horzl_GLevNonU" (horizontal gray level nonuniformity). The entropy measure is from the GLCM and Horzl_GLevNonU is from the RLM.

Table 5 shows the accuracies for the individual and combined datasets. LOO cross-validation using the top two features again provides the maximum classification rates for the moisture-affected datasets through SVM. For the combined dataset NB provides the maximum classification accuracy of 91.10%, but with a low sensitivity of 29.41%. This is depicted in Figure 6.

Similarly, in Experiment II, we have evaluated the accuracies of the proposed methodology with the train and test model using all 281 features, the selected top 15 features, and the top two features. All accuracies in this experiment are calculated by providing the test datasets to a trained model.

Table 6 shows the test results using all 281 features. For the overall combined dataset an accuracy of 86.30% and for the combined humidity dataset an accuracy of 86.67% were achieved through SVM. On individual datasets, such as those for temperature and moisture, NB provides more accurate results: NB achieves 93.75% accuracy for W1 (with 100% sensitivity and 85.71% specificity) and 87.50% for T2. For A1, SVM provides the maximum accuracy of 78.57%. Figure 7 shows the results in graphical form.

The test results for the selected top 15 features are shown in Table 7. The features selected by CS outperform those from GR in most of the cases. NB provides relatively lower accuracies than SVM and KNN. For tablets affected by humidity and temperature KNN provides better accuracies, but for moisture-affected tablets SVM is better. When the trained model is tested on the combined datasets, a maximum accuracy of 91.18% is achieved on the moisture-affected tablet datasets. A graphical representation of these results is shown in Figure 8.

The accuracies for the top two selected features on the test datasets are provided in Table 8. It can be seen from the results that SVM is better for almost all of the datasets except humidity; for the humidity-affected datasets KNN provides better results. For W3, SVM provides 88.24% accuracy with 88.89% sensitivity and 87.5% specificity. For the overall combined dataset NB provides the maximum accuracy of 90.14%, with 98.48% specificity. These results are also shown in Figure 9.

4. Conclusion

In this research we have proposed a new methodology for the classification of defective and nondefective tablets using image processing and machine learning techniques. In the proposed approach we use textural features extracted from the surfaces of the preprocessed images. The whole analysis is performed on nondefective tablets and on defective tablets whose surfaces were affected by three environmental factors, that is, temperature, humidity, and moisture. A comparative analysis is performed using all 281 features, the top 15 features (extracted using CS, GR, and RF), and the top 2 features. Classification is performed using the SVM, KNN, and NB classifiers. The analysis shows that higher accuracies are achieved on moisture-affected tablets, as moisture reacts quickly with the APIs of the tablets. Across the different experiments, the proposed methodology with SVM performs better than the other two classifiers for most of the datasets. In the future, a combination of spatial and spectral data of the tablets could be used to achieve higher accuracies.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.