Abstract

Alzheimer’s disease is a neurodegenerative disorder characterized by amyloid-β (Aβ) deposits in the brain. Accurate detection of the disease is challenging, however, because the underlying pathological changes in the brain are difficult to identify. In this paper, the retinal imaging changes associated with Alzheimer’s disease are classified into two classes: wild-type (WT) and transgenic mouse model (TMM). For testing, optical coherence tomography (OCT) images are used for the two-group classification. The classification is implemented with support vector machines (SVMs), with the optimum kernel selected by a genetic algorithm; among the kernel functions evaluated, the radial basis function kernel provides the best classification result. To enable effective SVM classification, texture features are extracted and selected from the retinal images. The overall accuracy reached 92%, with 91% precision, for the classification of transgenic mice.

1. Introduction

Neurodegenerative disease is among the most common causes of disability [1, 2]. Because Alzheimer’s disease develops over a long period, patients can benefit from frequent testing and early treatment. However, owing to their high cost and limited availability, current clinical diagnostic imaging techniques do not meet the specific needs of a screening method [3, 4]. In this study, we prioritize assessment of the retina, particularly the retinal vasculature, as a potential means of performing dementia assessments in chronic Alzheimer’s disease. Pathological alterations may begin 20 or more years before neurological dysfunction manifests, and by the time neurotoxic effects become apparent, cerebral deterioration has already progressed substantially. The Alzheimer’s Society, the National Institutes of Health, and the Global Advisory Committee on AD have suggested a research paradigm built on a set of confirmed indicators of both kinds of abnormalities that serve as proxies for AD, in order to identify AD in living persons [5–7]. Flexible, scalable neural networks were used throughout that pipeline, which achieved an overall accuracy of 82.44% on UK Biobank data and included a saliency analysis of its interpretability in addition to the classifier shown in Figure 1. The detection of transgenic mice is carried out from the input fundus image, but existing approaches have a high false detection rate that degrades the accuracy of the system. In addition, the following problems arise in the optimal detection of classes:
(i) Difficulty in feature differentiation: detection of transgenic mice relies on various features such as texture, color, and intensity, but differentiating these minute features from one another is a hard task that degrades the accuracy of disease computation.
(ii) Class overlap: existing approaches also determine the class of the input image, but the limited training data for each severity level causes a class imbalance problem that affects classification accuracy.
(iii) Improper preprocessing: the conventional preprocessing and contrast enhancement techniques used by existing approaches make it difficult to distinguish the features from the background.

The major objective of this study is to provide a precise classification between WT and TMM and to compute the disease assessment accurately. This objective is achieved by fulfilling the following subobjectives:
(i) To minimize the level of artifacts in the input image through effective preprocessing
(ii) To maximize the precise identification of features in the preprocessed image through contrast enhancement
(iii) To effectively classify the images into two classes based on the extraction of significant features
(iv) To determine the disease-related features from variations in feature intensity for the purpose of diagnosis

2. Related Works

In [8], the authors investigated alterations in the optic disc linked with Alzheimer’s disease, using the retina as a window into the central and peripheral nervous system. Optical coherence tomography was used to analyse the retinas of transgenic mouse models (TMM) of Alzheimer’s disease and wild-type (WT) controls, and support vector machines with the radial basis function kernel were used to categorize the retinas into TMM and WT classes. Predictions were over 80% accurate at the age of four months and over 90% accurate at the age of eight months. Consistent with these results, feature extraction from the acquired fundus images shows a much more diverse retinal architecture in the mouse models at the age of eight months.

In [9], angle-resolved low-coherence interferometry (a/LCI) coregistered with optical coherence tomography (OCT) was used to obtain in vivo light scattering data from the retinas of triple transgenic Alzheimer’s disease (3xTg-AD) mice and age-matched wild-type (WT) controls. Visual guidance and segmentation depths supplied by coregistered OCT B-scans were used to obtain angle-resolved scattering data from the nerve fiber layer, the outer plexiform layer, and the retinal pigment epithelium. Comparing the in vivo retinas of AD mice with WT controls, OCT imaging revealed substantial thinning of the nerve fiber layer. The a/LCI scattering measures offered additional information that helps to differentiate AD mice by quantifying tissue heterogeneity: compared with the WT mice, the eyes of AD mice showed a wider range of nerve fiber layer scattering intensities.

In [10], the authors describe the relationship between retinal image characteristics and cerebral amyloid-β (Aβ) load, in the hope of establishing a noninvasive method for predicting Aβ deposition in Alzheimer’s disease. A substantial variation in textural patterns across retinal capillaries and their neighbouring areas was detected in Aβ+ participants compared with Aβ− individuals. Using the collected characteristics, classifiers were trained to classify new individuals; with an accuracy of 85%, the classification can distinguish Aβ+ patients from Aβ− patients.

3. Proposed Work

This section presents the description of the proposed model for the classification of transgenic mice using SVMs.

3.1. Preprocessing

To enhance the information available to the disease diagnosis system, the following preprocessing steps are applied:
(i) Artifact removal: blurriness, weak edges, and uneven illumination are artifacts, which are removed with a nonlinear diffusion filtering algorithm; it eliminates these artifacts while ensuring image quality in terms of illumination correction and edge preservation. A minimal sketch of such a diffusion filter is given after this list.
(ii) Contrast enhancement: low contrast is one of the important issues in image classification. In this work, contrast enhancement is treated as an optimization problem whose intention is to optimize the pixel values based on the contrast level of the input image.
(iii) Image normalization: normalization adjusts the pixel intensity or RGB color values of retinal images, increasing the quality of the acquired fundus images by reducing equipment noise and other unwanted noise in the retinal images. The misrepresentations and fluctuations that arise in the retinal images because of inexact image acquisition are then corrected. During normalization, the acquired image is transformed into a predetermined range of values. Image normalization is a preprocessing technique that maps the given inputs to an expected output range; it is also useful for prediction and forecasting, since normalizing the values keeps large variations closer together. Several existing techniques can be used for image normalization:
(i) Min-max normalization
(ii) Z-score normalization
(iii) Decimal scaling
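Before turning to normalization, the following is a minimal sketch of the nonlinear diffusion filtering used for artifact removal in step (i). The paper does not specify its exact diffusion scheme, so the Perona-Malik conductance function, the wrap-around border handling, and the parameter values here are assumptions.

```python
import numpy as np

def perona_malik(img, n_iter=15, kappa=30.0, lam=0.2):
    """Edge-preserving nonlinear diffusion (Perona-Malik style), a sketch.

    img: 2-D grayscale array. kappa controls edge sensitivity; lam <= 0.25
    keeps the explicit scheme stable. Borders wrap via np.roll (an assumption).
    """
    u = img.astype(np.float64)
    for _ in range(n_iter):
        # Differences to the four neighbours via circular shifts.
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        # Conductance g(s) = exp(-(s / kappa)^2): small across strong edges,
        # so edges are preserved while flat (noisy) regions are smoothed.
        u += lam * (np.exp(-(dN / kappa) ** 2) * dN
                    + np.exp(-(dS / kappa) ** 2) * dS
                    + np.exp(-(dE / kappa) ** 2) * dE
                    + np.exp(-(dW / kappa) ** 2) * dW)
    return u
```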

Figure 2 describes the proposed work. In the following, these normalization techniques are described in detail.

(i) Min-max normalization: this technique applies a linear transformation to the original data values. It uses a predefined boundary for the specific retinal images. The min-max normalization is estimated as

$$v' = \frac{v - A}{B - A}\,(D - C) + C,$$

where $v'$ is the normalized value, $[A, B]$ is the original range of the data values, and $[C, D]$ is the predefined target boundary. When the range $[A, B]$ of the data matches the target boundary, the transformation leaves the values unchanged, which can be used for result validation.

(ii) Unstructured data in general can be normalized using Z-score normalization, which is expressed as

$$v' = \frac{v - \mu_E}{\sigma_E},$$

where $v'$ is the normalized value of the input, $v$ is a value in column $E$, $\mu_E$ is the mean of that column, and $\sigma_E$ is its standard deviation. The Z-score is computed column by column; if the standard deviation of a column equals zero, all normalized values of that column are set to zero. After this standardization, the values can be rescaled to the range 0 to 1.

(iii) In decimal scaling, the resulting range is between −1 and 1. The normalized value is computed as

$$v' = \frac{v}{10^{j}},$$

where $v'$ is the scaled value, $v$ is the original value, and $j$ is the smallest integer such that $\max(|v'|) < 1$. The above-mentioned techniques are the basis for the discussion of normalization that follows.
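The three classic techniques above translate directly into code. The following Python sketch mirrors the formulas; the guards against degenerate inputs (constant images, all-zero values) are additions of ours, not specified by the paper.

```python
import numpy as np

def min_max_norm(x, c=0.0, d=1.0):
    """Map values from their observed range [A, B] to the boundary [C, D]."""
    a, b = float(x.min()), float(x.max())
    if a == b:                        # degenerate range: return the lower bound
        return np.full_like(x, c, dtype=np.float64)
    return (x - a) / (b - a) * (d - c) + c

def z_score_norm(x):
    """Standardize to zero mean, unit variance; zero-variance input -> zeros."""
    sigma = x.std()
    if sigma == 0:
        return np.zeros_like(x, dtype=np.float64)
    return (x - x.mean()) / sigma

def decimal_scaling_norm(x):
    """Divide by 10^j, with j the smallest integer making all |values| < 1."""
    m = float(np.abs(x).max())
    if m == 0:
        return x.astype(np.float64)
    j = int(np.floor(np.log10(m))) + 1
    return x / (10.0 ** max(j, 0))
```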

Combining the above three techniques produces the proposed result, namely improved min-max decimal with Z-normalization. The proposed retinal image normalization technique is an advanced and effective normalization method that accepts various types of input images and produces outputs in the range of 0 to 1. The technique takes the average value as a threshold and then normalizes, replacing the pixel values on one side of the threshold using the mean and standard deviation.

Compared with the min-max, Z-score, and decimal scaling techniques, the proposed advanced image normalization technique produces more effective results and offers the following advantages over the existing methods:
(i) Suited to datasets of any volume (small, medium, or large)
(ii) Individual pixel-based scaling and transformation are possible
(iii) Makes the result independent of the data size
(iv) Produces normalized values in the range 0 to 1
(v) Easy to apply to any numerical data

The proposed innovative normalization technique is mathematically expressed as

$$v' = \frac{v}{10^{d}},$$

where $v$ is a particular element of the data, $d$ is the number of digits in $v$, and $v'$ is the scaled value between 0 and 1. A sketch of this normalization appears after the list below. The proposed model is applicable to inputs of all lengths over the full range of integers. The technique differs from existing normalization approaches as follows:
(i) It converts unstructured data into structured data.
(ii) It is formulated explicitly for scaling.
(iii) All inputs are numerical data only.

Low-light enhancement: recent low-light enhancement methods are not guaranteed to work in low-light environments. A new method for low-light enhancement should therefore focus on the following:
(i) Improve the efficiency and robustness of low-light image enhancement algorithms; previous methods are insufficient to meet the needs of current applications.
(ii) Adjust to different types of images at different scales to produce a high-quality result.
(iii) Minimize the time and space complexity of the overall computation, so that practical, real-time applications are supported.
(iv) Avoid the long-running operations of most existing techniques, which increase processing time and lead to two problems: detail ambiguity and color deviation.
(v) Establish a higher-quality image evaluation, in which image information recovery and color recovery functions are used to adjust the low-light enhancement.
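Since the paper does not spell out exactly how the three techniques are combined, the sketch below is one possible reading of the improved min-max decimal with Z-normalization: per-pixel decimal scaling by digit count, a mean threshold, Z-standardization above the threshold, and a final min-max mapping to [0, 1]. Every design choice here is an assumption.

```python
import numpy as np

def combined_norm(img):
    """One possible reading of 'improved min-max decimal with Z-normalization'.

    ASSUMED interpretation: each pixel v is decimal-scaled by its own digit
    count d (v / 10^d), pixels above the mean threshold are Z-standardized,
    and the result is min-max mapped to [0, 1]. None of these choices is
    confirmed by the paper.
    """
    v = img.astype(np.float64)
    digits = np.floor(np.log10(np.maximum(v, 1.0))) + 1.0   # d = digit count
    scaled = v / np.power(10.0, digits)                     # v' = v / 10^d
    thr = scaled.mean()                                     # mean threshold
    sigma = scaled.std() + 1e-12
    scaled = np.where(scaled > thr, (scaled - thr) / sigma, scaled)
    lo, hi = scaled.min(), scaled.max()
    return (scaled - lo) / (hi - lo + 1e-12)                # final [0, 1] map
```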

To address these issues, multiscale Retinex theory is applied: a color restoration method that enhances image quality using single-scale or multiscale Retinex processing. The algorithm is applied to the three color channels R, G, and B separately; the original image is thus split into channels, which avoids the color distortion issue. The single-scale Retinex for channel $i$ is

$$R_i(x, y) = \log I_i(x, y) - \log\left[F(x, y) * I_i(x, y)\right],$$

where $I_i$ is the input channel and $F$ is a Gaussian surround function. The multiscale Retinex averages this over $N$ scales with weights $w_n$:

$$R_{\mathrm{MSR},i}(x, y) = \sum_{n=1}^{N} w_n \left\{ \log I_i(x, y) - \log\left[F_n(x, y) * I_i(x, y)\right] \right\}.$$

For each channel, a color recovery factor is computed to express the proportional relationship between the R, G, and B channels:

$$C_i(x, y) = \beta \left\{ \log\left[\alpha\, I_i(x, y)\right] - \log \sum_{j} I_j(x, y) \right\},$$

where $C_i$ is the function mapping the color values, the logarithm is used in computing the color recovery, and $\alpha$ and $\beta$ are gain and scaling parameters. The restored output is $R_{\mathrm{MSRCR},i} = C_i \cdot R_{\mathrm{MSR},i}$.

The algorithm exploits the merits of the convolution operation using Gaussian kernels. Using multiple scales, that is, small, medium, and large surround patches, yields good enhancement effects. The performance of color restoration improves as the color recovery factor is updated over successive iterations.
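A compact sketch of multiscale Retinex with color restoration (MSRCR) along these lines is given below. The scale set and the $\alpha$, $\beta$ values are conventional MSRCR defaults, not values reported by the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msrcr(img, sigmas=(15, 80, 250), alpha=125.0, beta=46.0):
    """Multiscale Retinex with color restoration (MSRCR), a sketch.

    img: H x W x 3 RGB array. sigmas are the small/medium/large Gaussian
    surrounds; alpha and beta are conventional MSRCR defaults (assumptions).
    """
    i = img.astype(np.float64) + 1.0          # +1 avoids log(0)
    msr = np.zeros_like(i)
    for c in range(3):                        # R, G, B handled separately
        for sigma in sigmas:                  # average over the three scales
            surround = gaussian_filter(i[..., c], sigma)
            msr[..., c] += (np.log(i[..., c]) - np.log(surround)) / len(sigmas)
    # Color recovery factor C_i = beta * (log(alpha * I_i) - log(sum_j I_j))
    crf = beta * (np.log(alpha * i) - np.log(i.sum(axis=2, keepdims=True)))
    out = crf * msr
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (out * 255.0).astype(np.uint8)     # rescaled for display
```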

3.2. Feature Extraction

The gray-level co-occurrence matrix (GLCM) captures numerical features of a texture using the spatial relations of similar gray tones. Table 1 lists the features derivable from a normalized co-occurrence matrix.

3.2.1. Computation of Textural Features from Normalized GLCM
Energy measures the uniformity (or orderliness) of the gray-level distribution of the image:

$$\text{Energy} = \sum_{i,j} p(i,j)^2, \quad \text{range} = [0, 1].$$

Homogeneity measures the smoothness (homogeneity) of the gray-level distribution of the image:

$$\text{Homogeneity} = \sum_{i,j} \frac{p(i,j)}{1 + |i - j|}, \quad \text{range} = [0, 1].$$

Contrast (Tables 2 and 3) measures the intensity contrast between a pixel and its neighbor over the whole image:

$$\text{Contrast} = \sum_{i,j} |i - j|^2\, p(i,j), \quad \text{range} = \left[0, \left(\mathrm{size}(\mathrm{GLCM}, 1) - 1\right)^2\right],$$

where $p(i,j)$ denotes the normalized GLCM entry.
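These three features can be computed with scikit-image (version 0.19 or later is assumed for the graycomatrix/graycoprops names); the distance and angle offsets below are illustrative choices, not values specified by the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_img):
    """Energy, homogeneity, and contrast from a normalized GLCM.

    gray_img: 2-D uint8 image. Distance 1 and angles 0/90 degrees are
    illustrative choices; values are averaged over the offsets.
    """
    glcm = graycomatrix(gray_img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("energy", "homogeneity", "contrast")}
```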
3.3. Classification

For the classification of retinal images into the two classes WT and TMM, SVMs are used, and the optimum kernel function is selected from a set of candidate kernel functions. Figure 3 shows a pictorial representation of classification using SVMs.
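The paper does not detail its genetic algorithm encoding, so the following scikit-learn sketch shows one plausible GA that searches jointly over the kernel type and the C and gamma hyperparameters, using cross-validated accuracy as fitness on a feature matrix X and labels y. Population size, selection, crossover, and mutation scheme are all assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

KERNELS = ["linear", "poly", "rbf", "sigmoid"]
rng = np.random.default_rng(0)

def random_genome():
    # Genome = (kernel index, log10 C, log10 gamma).
    return [int(rng.integers(len(KERNELS))),
            rng.uniform(-2, 3), rng.uniform(-4, 1)]

def fitness(genome, X, y):
    # Fitness = 5-fold cross-validated accuracy of the decoded SVM.
    k, log_c, log_g = genome
    clf = SVC(kernel=KERNELS[int(k)], C=10.0 ** log_c, gamma=10.0 ** log_g)
    return cross_val_score(clf, X, y, cv=5).mean()

def ga_select_kernel(X, y, pop_size=12, generations=10, mut_rate=0.3):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda g: fitness(g, X, y), reverse=True)
        elite = ranked[: pop_size // 2]                      # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.choice(len(elite), size=2, replace=False)
            child = [elite[a][0], elite[b][1], elite[a][2]]  # gene mixing
            if rng.random() < mut_rate:                      # mutate one gene
                gene = int(rng.integers(3))
                child[gene] = random_genome()[gene]
            children.append(child)
        pop = elite + children
    best = max(pop, key=lambda g: fitness(g, X, y))
    return KERNELS[int(best[0])], 10.0 ** best[1], 10.0 ** best[2]
```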

4. Experimental Results and Discussion

The proposed work is implemented to provide a precise classification between WT and TMM and to compute the accuracy of disease detection reliably. The work comprises preprocessing and feature extraction. Preprocessing minimizes the level of artifacts in the input image, and contrast enhancement increases the precise identification of features in the preprocessed image. Feature extraction is then applied to classify the images into two classes based on the extracted significant features.

In this section, the performance of the proposed model is evaluated on the set of images in the dataset shown in Figure 4. Table 4 describes the confusion matrix for the two classes in terms of four kinds of counts, each defined below; classifier performance is shown in Table 5.
(i) True positives (TP): the number of candidates correctly identified as TMM
(ii) False positives (FP): the number of candidates incorrectly identified as TMM
(iii) True negatives (TN): the number of candidates correctly identified as non-TMM
(iv) False negatives (FN): the number of candidates incorrectly identified as non-TMM
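From these four counts, the usual evaluation metrics follow directly; the sketch below computes them (the counts in the usage comment are hypothetical, for illustration only, not the paper's data).

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard metrics from the confusion-matrix counts of Table 4."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)         # of predicted TMM, how many are correct
    recall = tp / (tp + fn)            # sensitivity to true TMM cases
    specificity = tn / (tn + fp)       # correctness on non-TMM cases
    return accuracy, precision, recall, specificity

# Hypothetical counts, for illustration only:
# print(classification_metrics(tp=46, fp=4, tn=46, fn=4))
```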

5. Conclusion

Alzheimer’s disease is a progressive neurodegenerative illness defined by the presence of amyloid-β (Aβ) in the brain. Nevertheless, because the degenerative changes of the brain are complicated to classify, precise detection of this condition is a difficult process. In this paper, the abnormalities in retinal fundus images associated with Alzheimer’s disease are divided into two categories: wild-type (WT) and transgenic mouse model (TMM). Optical coherence tomography (OCT) images are utilised to classify the subjects into the two categories for assessment. SVMs are used to classify the data, with the best kernel selected via a genetic algorithm; the RBF kernel outperforms the other SVM kernels in terms of accuracy. The textural properties of the retinal fundus images are used to achieve an efficient classification utilising SVM. The overall accuracy reached 92%, with 91% precision, for the classification of transgenic mice.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.