Abstract
It has become an inevitable trend for medical personnel to analyze and diagnose Alzheimer’s disease (AD) at different stages by combining functional magnetic resonance imaging (fMRI) with artificial intelligence technologies such as deep learning. In this paper, a classification method for AD is proposed based on two different fMRI transformation images, an improved 3DPCANet model, and canonical correlation analysis (CCA). The main ideas are as follows: first, fMRI images are preprocessed, and then mean regional homogeneity (mReHo) and mean amplitude of low-frequency fluctuations (mALFF) transformations are applied to the preprocessed images. Next, features of the mReHo and mALFF images are extracted using the improved 3DPCANet, and the two kinds of extracted features are fused by CCA. Finally, a support vector machine (SVM) is used to classify AD patients at different stages. Experimental results showed that the proposed approach is robust and effective. Classification accuracies for significant memory concern (SMC) vs. mild cognitive impairment (MCI), normal control (NC) vs. AD, and NC vs. SMC reached 95.00%, 92.00%, and 91.30%, respectively, which adequately demonstrates the feasibility and effectiveness of the proposed method.
1. Introduction
Alzheimer’s disease (AD) [1] is a currently incurable brain disorder, characterized by insidious onset and continuous progression, which causes a continuous decline of the patient’s cognitive and memory abilities and eventually leads to an abnormal life. Studies [2–4] suggest that significant memory concern (SMC) may be an early stage of mild cognitive impairment (MCI) and AD. Its clinical symptoms are objectively poor memory and cognitive decline accompanied by changes in brain structure, and it may evolve into AD. It is very important to accurately diagnose a patient’s disease stage, because AD currently cannot be cured completely; treatment can only slow or prevent its further development. In addition, the diagnosis of AD requires a mass of heterogeneous medical data, so manual data analysis places a heavy burden on medical staff.
With the rapid development of deep learning [5–8] and medical imaging technologies, more and more researchers have used medical imaging modalities such as magnetic resonance imaging (MRI) [9–12], positron emission tomography (PET), and computed tomography (CT), together with deep learning methods, to help medical personnel accurately diagnose and treat AD patients at various stages. Huang et al. [13] improved a deep learning network, VGGNet, to classify three-dimensional (3D) images; in their experiments, a classifier was trained on T1-MRI and FDG-PET images, and high precision was achieved. Islam and Zhang [14] proposed a convolutional neural network combining dense network modules, and their experimental results showed an accuracy of 99% for the nondementia stage. Zhang et al. [15] designed a convolutional neural network to extract features from the dual modalities of PET and MRI images; the extracted features were fused with information from the Mini-Mental State Examination (MMSE) and the Clinical Dementia Rating (CDR). The accuracies of AD vs. normal control (NC), MCI vs. NC, and AD vs. MCI were 100%, 96.58%, and 97.43%, respectively. Jain et al. [16] adopted a mathematical model based on a convolutional neural network (CNN) with transfer learning to diagnose AD; in this model, VGG-16 trained on the ImageNet dataset was used as a feature extractor for the classification tasks. The classification accuracies of AD vs. NC, AD vs. MCI, and NC vs. MCI reached 99.14%, 99.30%, and 99.22%, respectively. Most of the above methods involve binary classification among AD, NC, and MCI. However, during the evolution from NC to AD, other stages such as SMC exist. Therefore, this paper investigates subtype classification of AD at various stages.
PCANet is a common convolutional neural network proposed by Chan et al. [17]. Subsequently, Li et al. [18] extended the PCANet from a two-dimensional (2D) to a three-dimensional (3D) network and used it to diagnose AD patients with structural MRI (sMRI). In this paper, based on the 3DPCANet, a max-pooling layer and a rectified linear unit (ReLU) layer are added behind each convolution layer to reduce the redundancy of image features. The improved 3DPCANet model is used to extract texture and nonlinear features of brain images. Experimental results demonstrate that the improved method can effectively increase classification accuracy.
As a noninvasive imaging technology, fMRI [19] is used to measure spontaneous brain activity, which can reflect the status of different brain regions at different times. Many studies suggest that functional characteristics of fMRI at different levels, such as the amplitude of low-frequency fluctuations (ALFF) [20], regional homogeneity (ReHo) [21], and regional functional correlation strength (RFCS) [22], can reflect brain diseases and thus assist medical personnel in diagnosis. Dai et al. [23] applied different types of transforms to fMRI data, including ALFF, ReHo, RFCS, and gray matter density (GMD), and combined them with multilevel characterization with multiclassifier (M3) to diagnose AD patients, obtaining good results. However, when multimodal data are used directly, feature redundancy often occurs, which degrades classification results. To address this problem, in this paper, two functional image transforms of fMRI, ALFF and ReHo, are selected for feature extraction, and canonical correlation analysis (CCA) is then used to fuse the two kinds of features.
Inspired by the above ideas, a method to diagnose AD based on different functional characteristics of fMRI and a CCA fusion strategy is proposed in this paper. First, fMRI images are preprocessed and transformed into mReHo (mean ReHo) and mALFF (mean ALFF) images. Then, these two kinds of transformed images are input into the improved 3DPCANet model, respectively, for feature extraction. Next, the two kinds of features are fused by CCA. Finally, a support vector machine (SVM) is used for classification. The contributions of this paper are as follows. (1) Because fMRI data are in four-dimensional (4D) form and features cannot be extracted from them directly, the 4D fMRI images are converted into 3D form using image transformations such as ALFF and ReHo. (2) The traditional 3DPCANet network is improved by adding a max-pooling layer behind each convolution layer used for feature extraction, so feature redundancy and human error can be effectively reduced. (3) CCA is used to fuse the two kinds of image features, improving classification accuracy. (4) AD patients at different stages, especially including SMC, are classified fully automatically, which can assist medical personnel in accurately diagnosing and analyzing AD.
The rest of this paper is organized as follows. Section 2 introduces the experimental dataset and the proposed method. Section 3 gives the experimental results and analysis of the proposed and compared methods. A conclusion is drawn in Section 4.
2. Methodology
The framework of the proposed method is shown in Figure 1. Specifically, fMRI images are first preprocessed and transformed. Then, features of the transformed images are extracted using the improved 3DPCANet, and the two kinds of features are fused by CCA. Finally, an SVM is used to classify AD patients at different stages. The detailed steps are explained in the following sections.

2.1. Data Preprocessing
The fMRI data used in this study were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). The fMRI dataset includes 34 patients with AD, 26 patients with SMC, 57 patients with EMCI, 35 patients with LMCI, 38 patients with MCI, and 50 NC. Detailed information is shown in Table 1.
The fMRI data were preprocessed with the DPARSF [24] toolbox. Because of the instability of the initial fMRI signal, the first 10 time points of each fMRI scan are deleted, and the remaining volumes are slice-timing corrected, realigned, and normalized. The images are registered to the Montreal Neurological Institute (MNI) template. The preprocessed images are shown in Figure 2.

2.2. ALFF Transform Images
The amplitude of low-frequency fluctuations (ALFF) value is the root mean square of the power spectrum of the blood-oxygen-level-dependent (BOLD) signal in the low-frequency band (0.01 Hz–0.08 Hz). The steps are as follows. (1) The time series of each voxel, after removal of linear drift, is passed through a 0.01 Hz–0.08 Hz band-pass filter. (2) A fast Fourier transform of the filtered result gives the power spectrum. (3) The square root of the power spectrum is calculated. (4) The average of the values from step (3) over the band is the ALFF.
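The ALFF/mALFF steps above can be sketched for a single voxel time series as follows. This is a minimal illustration: the TR value, band edges, and function names are our assumptions, and detrending/band-pass filtering is assumed to have been handled during preprocessing apart from the demeaning shown here.

```python
import numpy as np

def alff(ts, tr=3.0, band=(0.01, 0.08)):
    """ALFF of one voxel time series: mean square root of the power
    spectrum within the low-frequency band."""
    ts = ts - ts.mean()                      # demean (linear detrending assumed upstream)
    n = len(ts)
    power = np.abs(np.fft.rfft(ts)) ** 2 / n # one-sided power spectrum
    freqs = np.fft.rfftfreq(n, d=tr)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.sqrt(power[mask]).mean()       # mean root of power = ALFF

def malff(alff_map, brain_mask):
    """mALFF: each voxel's ALFF divided by the whole-brain mean ALFF."""
    return alff_map / alff_map[brain_mask].mean()
```

In `malff`, dividing by the whole-brain mean makes the map average to 1 inside the mask, so group differences are expressed relative to global activity.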
Low-frequency signal energy is used to represent the activity of neurons in different brain regions. The mean amplitude of low-frequency fluctuations (mALFF) is obtained by dividing each voxel’s ALFF by the average ALFF of all voxels in the whole brain. Because the brain structure of AD patients has changed, it is believed that the neuronal activity of each brain area will likewise change compared with that of the normal control group. mALFF is computed from the preprocessed fMRI data; the images after mALFF transformation are shown in Figure 3.

2.3. ReHo Transform Images
The ReHo method was originally proposed by Jiang and Zuo [21] to measure the regional synchronization of fMRI time courses. ReHo assumes that a given voxel is temporally similar to its adjacent voxels, and this similarity is measured by Kendall’s coefficient of concordance.
Let the fMRI data be denoted by a 4D array of size R×C×S×n, where R is the number of rows, C is the number of columns, S is the number of slices, and n is the number of time points for each voxel (the length of the time series). The data contain R×C×S voxels. A voxel is denoted by (r, c, s), and the local consistency of the time series of a given voxel and its nearest neighboring voxels (usually 6, 18, or 26 neighbors) is calculated as follows: (1) The time series of the K voxels in the cluster (the voxel plus its neighbors) are expressed as a matrix of size n×K, whose element y_{t,j} represents the t-th time point of the j-th voxel. (2) Each element value is replaced by its rank within its column (the ordinal number of the element’s value in the j-th column), and a rank matrix of size n×K is obtained, whose element r_{t,j} represents the rank at the t-th time point of the j-th voxel. (3) Kendall’s coefficient of concordance W of the cluster’s time series is calculated as shown in the following formula:

W = (sum_{t=1}^{n} R_t^2 − n·R̄^2) / ((1/12)·K^2·(n^3 − n)),

where R_t = sum_{j=1}^{K} r_{t,j} is the rank sum of time point t, and R̄ = (n+1)K/2 is the average value of the R_t. W represents the local consistency of voxel (r, c, s): the closer it is to 1, the greater the similarity of the time series.
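Kendall’s coefficient of concordance for one voxel cluster can be computed as follows (a sketch; the array layout and function name are our assumptions):

```python
import numpy as np
from scipy.stats import rankdata

def kendall_w(ts_cluster):
    """Kendall's coefficient of concordance (KCC / ReHo) for a cluster of
    voxel time series given as an (n_timepoints, K_voxels) array."""
    n, k = ts_cluster.shape
    # rank each voxel's series over time (column-wise ranks)
    ranks = np.apply_along_axis(rankdata, 0, ts_cluster)
    rank_sums = ranks.sum(axis=1)             # R_t: rank sum at each time point
    r_bar = (n + 1) * k / 2.0                 # mean rank sum
    return ((rank_sums ** 2).sum() - n * r_bar ** 2) / (k ** 2 * (n ** 3 - n) / 12.0)
```

If all K time series are identical, every time point has the same relative rank everywhere and W equals 1; fully discordant series drive W toward 0.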
Voxels with high values in the ReHo transform images have time series that are similar to those of their neighbors: the greater Kendall’s coefficient of concordance, the more similar the time series. Mean ReHo (mReHo) is obtained by dividing each voxel’s ReHo value by the average ReHo value of the whole brain. After smoothing, the processed images are shown in Figure 4.

2.4. Improved 3DPCANet
PCANet [18] is a simple convolutional neural network that mainly comprises principal component analysis (PCA) convolution layers, binary hashing, and block histograms. PCA is used to obtain the eigenvectors of the target matrix, and these eigenvectors serve as the convolution kernel parameters; binary hashing and block histograms play the roles of indexing and pooling. In this paper, based on the traditional 3DPCANet, a max-pooling layer and a ReLU layer are added behind each convolution layer to reduce the redundancy of image features after convolution and to learn texture and nonlinear features of brain images, thereby improving classification accuracy. N training images of size m×n×p are taken as the input of the 3DPCANet, and features are extracted as follows.

(a) Input layer
Each training image is scanned with a sliding window into blocks (patches) of size k1×k2×k3. Each patch is vectorized and de-meaned, i.e., x̄_{i,j} = x_{i,j} − mean(x_{i,j}), where x_{i,j} represents the j-th patch vector of the i-th image (as shown in Equation (2)).
All voxel patches of the i-th image are processed by Equation (2) and collected into the matrix X̄_i = [x̄_{i,1}, x̄_{i,2}, …, x̄_{i,q}] (Equation (3)), whose columns are the different patch vectors of the same image. Processing all N training images through Equation (3) yields the matrix X = [X̄_1, X̄_2, …, X̄_N] (as shown in Equation (4)).
Then, the dimensionality of the matrix X is reduced by PCA, which minimizes the reconstruction error over a set of orthonormal filters, described as

min_V ‖X − V V^T X‖_F^2, s.t. V^T V = I_{L1},

where I_{L1} is the identity matrix of size L1×L1. The solution of this formula consists of the L1 leading eigenvectors of X X^T. The expression of the PCA filters is shown in Equation (6):

W_l^1 = mat_{k1,k2,k3}(q_l(X X^T)), l = 1, 2, …, L1,

where mat_{k1,k2,k3}(·) is a function that maps a vector to a k1×k2×k3 array, q_l(X X^T) represents the l-th leading eigenvector of X X^T, and W_l^1 is the l-th filter generated in the first stage. Each PCA filter is convolved with the i-th training image, which is expressed by

I_i^l = I_i * W_l^1,

where the symbol “*” represents convolution; each filter is used to convolve all N training images, generating N·L1 images in total. Then, max-pooling and ReLU are performed on the images generated by formula (7), which is expressed by

Ī_i^l = ReLU(MP1(I_i^l)),

where MP1 denotes the max-pooling operation of the first stage, and Ī_i^l represents the image after the max-pooling layer and ReLU layer are applied.

(b) Middle layer
In the middle layer, the L1 images generated from the i-th training image in the first stage are processed with operations analogous to Equations (2)–(4), and a matrix Y is obtained.
The second-stage filters W_{l'}^2 (l' = 1, 2, …, L2) are obtained by applying PCA to the matrix Y, as in the first stage. The images generated by formula (8) in the first stage are convolved with the obtained PCA filters, as described by formula (10).
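Under illustrative assumptions (cubic patches of side k, a small filter count, pooling stride 2 — none of these are the paper’s tuned values), the PCA filter learning and the convolution → max-pooling → ReLU unit shared by both stages can be sketched as:

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import fftconvolve

def pca_filters_3d(images, k=3, n_filters=4):
    """Learn PCA filters: collect de-meaned k*k*k patches from every image
    and take the leading eigenvectors of the patch scatter matrix."""
    cols_all = []
    for img in images:
        win = np.lib.stride_tricks.sliding_window_view(img, (k, k, k))
        cols = win.reshape(-1, k * k * k).T          # one patch per column
        cols = cols - cols.mean(axis=0, keepdims=True)  # remove each patch's mean
        cols_all.append(cols)
    X = np.concatenate(cols_all, axis=1)             # stack patches of all images
    eigval, eigvec = np.linalg.eigh(X @ X.T)
    top = eigvec[:, np.argsort(eigval)[::-1][:n_filters]]
    return [top[:, l].reshape(k, k, k) for l in range(n_filters)]

def conv_pool_relu(img, filt, pool=2):
    """One improved-3DPCANet unit: convolution, then max-pooling, then ReLU."""
    out = fftconvolve(img, filt, mode='same')
    out = maximum_filter(out, size=pool)[::pool, ::pool, ::pool]
    return np.maximum(out, 0.0)                      # ReLU
```

The second stage reuses the same two functions on the first-stage outputs; only the input set changes.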
Thus, among the N·L1 images generated in the first stage, each image generates L2 images through formula (10). Max-pooling and ReLU processing are then performed on each image after convolution, as described by

O_i^{l,l'} = ReLU(MP2(Ī_i^l * W_{l'}^2)),

where MP2 denotes the max-pooling of the middle layer, and O_i^{l,l'} is the image generated by the max-pooling and ReLU operations.

(c) Output layer
The Heaviside step function H(·), which equals 1 for positive arguments and 0 otherwise, is used to binarize all L2 images derived from a given first-stage image, and weighted processing is performed to obtain the integer map

T_i^l = sum_{l'=1}^{L2} 2^{l'−1} H(O_i^{l,l'}).
Finally, each image T_i^l is divided into blocks of size h1×h2×h3, in overlapping or nonoverlapping form; in the program, R represents the overlap rate between blocks. The histogram (with 2^{L2} bins) of each block is computed, and all block histograms are concatenated:

f_i = [Bhist(T_i^1), Bhist(T_i^2), …, Bhist(T_i^{L1})],

where Bhist(·) is a function performing block division, histogram statistics, and concatenation of an image, and f_i represents the final feature vector of the i-th training image extracted by the 3DPCANet.
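A sketch of the output layer for the nonoverlapping case (R = 0); the block size and function names are illustrative assumptions:

```python
import numpy as np

def hash_and_histogram(stage2_maps, block=4):
    """Output layer: Heaviside-binarize the L2 second-stage maps of one
    first-stage image, fuse them with powers-of-two weights into one integer
    map, then concatenate per-block histograms (2**L2 bins)."""
    L2 = len(stage2_maps)
    n_bins = 2 ** L2
    # weighted binary hashing: T = sum_l 2^(l-1) * H(O_l)
    T = sum((2 ** l) * (m > 0).astype(np.int64) for l, m in enumerate(stage2_maps))
    feats = []
    s = block
    nx, ny, nz = (d // s for d in T.shape)
    for i in range(nx):                       # non-overlapping blocks (R = 0)
        for j in range(ny):
            for k in range(nz):
                blk = T[i*s:(i+1)*s, j*s:(j+1)*s, k*s:(k+1)*s]
                hist, _ = np.histogram(blk, bins=np.arange(n_bins + 1))
                feats.append(hist)
    return np.concatenate(feats)              # final feature vector f_i
```

Because the histogram discards voxel positions within each block, the feature is robust to small local deformations while the block grid preserves coarse spatial layout.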
The hyperparameters of the improved 3DPCANet include the patch size k1×k2×k3, the numbers of filters L1 and L2 of the two stages, the block size h1×h2×h3, and the overlap rate R between blocks in the output layer.
In conclusion, the convolutional kernel parameters of the improved 3DPCANet are learned by PCA, and the network does not require backpropagation. Therefore, the improved 3DPCANet does not need a large training dataset and is suitable for small datasets. Because the amount of AD data is small, the improved 3DPCANet is used as the feature extraction model in this paper. It is applied to the mALFF and mReHo transformation images, respectively, and the two kinds of features are then fused by CCA.
2.5. Canonical Correlation Analysis
CCA [25] is an algorithm used to find correlations between two kinds of data. Let X ∈ R^{n×p} and Y ∈ R^{n×q} represent two kinds of datasets, where n is the number of samples of the two datasets, and p and q are the dimensions of the two kinds of data features, respectively. CCA is used to reduce the dimensions of X and Y; the d-dimensional feature vectors X' and Y' are obtained as described by

X' = X W_x, Y' = Y W_y,

where W_x and W_y represent the projection matrices of X and Y, respectively. The projection criterion of CCA is that, when the dimensionality of the two sets of data is reduced to d, their correlation coefficient is the largest. The objective function of CCA is

(W_x, W_y) = arg max corr(X W_x, Y W_y) = arg max (W_x^T S_{xy} W_y) / sqrt((W_x^T S_{xx} W_x)(W_y^T S_{yy} W_y)),

where S_{xx} and S_{yy} are the within-set covariance matrices and S_{xy} is the between-set covariance matrix; the corresponding projection vectors W_x and W_y are obtained by maximizing this correlation.
In this paper, CCA was used to find the correlation features between the transform images including mALFF and mReHo of the same patient. The correlation features were fused. Subsequently, fused features were inputted into SVM classifier to achieve the classification of AD patients with different stages.
2.6. SVM
A support vector machine (SVM) [26] is a supervised learning classifier that finds the maximal-margin hyperplane of the learning samples when making boundary decisions. The decision function of the SVM classifier is f(x) = sign(w^T x + b), and the parameters are obtained by solving

min_{w,b,ξ} (1/2)‖w‖^2 + C sum_{i=1}^{N} ξ_i, s.t. y_i(w^T x_i + b) ≥ 1 − ξ_i, ξ_i ≥ 0,

where w^T x + b = 0 represents the hyperplane and w is the normal vector of the hyperplane; x_i denotes a feature vector, b represents the bias, ξ_i is the relaxation (slack) coefficient, C represents the penalty factor, and N represents the number of samples. Sequential minimal optimization (SMO) is the most common method for finding the global optimal solution of the SVM. In addition to the SMO algorithm, other methods, including elephant herding optimization [31] and the krill herd algorithm [32], can also solve the same problem. Multiclass SVMs can be designed in a one-versus-one or one-versus-rest manner; in this paper, the one-versus-one design and the SMO optimization algorithm are chosen.
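As a hedged illustration of this classification setup, the sketch below uses scikit-learn’s SVC (whose libsvm backend uses an SMO-type solver) with a one-versus-one scheme; the synthetic data merely stand in for the fused CCA features and stage labels:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def svm_ovo_accuracy(X, y, C=1.0, cv=5):
    """Cross-validated accuracy of a linear one-vs-one SVM (SMO-type solver)."""
    clf = SVC(kernel='linear', C=C, decision_function_shape='ovo')
    return cross_val_score(clf, X, y, cv=cv).mean()

# hypothetical stand-in for the fused CCA features and diagnosis labels
X, y = make_classification(n_samples=120, n_features=20,
                           n_informative=8, random_state=0)
```

The penalty factor C trades margin width against training errors; on small neuroimaging cohorts it is usually tuned by grid search within the cross-validation loop.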
3. Experimental Results and Analysis
In this paper, two kinds of image transformations, mALFF and mReHo, are used; the improved 3DPCANet is used for feature extraction, and an SVM is used for classification. All deep learning models in this study were built with the PyTorch framework and run on a server with a 1.7 GHz Intel Xeon E5-2603 v4 CPU, 16.0 GB RAM, an NVIDIA RTX 2070 GPU with 8 GB memory, and the Windows 10 (64-bit) operating system. The evaluation indicators used in this paper include accuracy, sensitivity, and specificity (Equation (17)):

Accuracy = (TP + TN) / (TP + TN + FP + FN), Sensitivity = TP / (TP + FN), Specificity = TN / (TN + FP).
TP and TN, respectively, represent the number of true-positive and true-negative subjects. FP and FN, respectively, represent the number of false-positive and false-negative subjects. The positive class label is 1, and the negative class label is 0.
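These indicators can be computed directly from the confusion-matrix counts; a minimal sketch (the function name is ours), which also includes the precision and F1 score used later as complementary indices:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and F1 from binary labels
    (positive label = 1, negative label = 0)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)                  # sensitivity (recall)
    spe = tn / (tn + fp)                  # specificity
    pre = tp / (tp + fp)                  # precision
    f1 = 2 * pre * sen / (pre + sen)      # harmonic mean of precision and recall
    return acc, sen, spe, f1
```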
In traditional experiments, only accuracy is used as the evaluation criterion of a model, so the performance of an algorithm cannot be evaluated comprehensively. For the original dataset in this paper, the positive class far outnumbers the negative class, and the data imbalance is serious. Therefore, the F1 score and the area under the receiver-operating-characteristic (ROC) curve (AUC) are also used as evaluation indices, where the F1 score is calculated from precision and sensitivity (Equation (18)):

Precision = TP / (TP + FP), F1 = 2 × Precision × Sensitivity / (Precision + Sensitivity).

(1) The role of improved 3DPCANet
Features of the mALFF and mReHo transformation images were extracted using the improved 3DPCANet, and an SVM was used as the classifier. To demonstrate the performance of the improved 3DPCANet, the results using the traditional and the improved 3DPCANet, respectively, are shown in Table 2. The optimal results are obtained by parameter optimization with a grid search. The search ranges of the hyperparameters of the improved 3DPCANet — the patch size, L1, L2, the overlap rate R, and the block size — are (2, 8), (2, 6), (2, 6), (0, 0.6), and (5, 25), respectively.
As can be seen from Table 2, for the mALFF transformation, the experimental results were significantly improved for MCI vs. AD, and the other experiments also show improvements of different degrees while maintaining the original results. For the mReHo transformation, the results of three groups, SMC vs. MCI, SMC vs. AD, and EMCI vs. LMCI, are improved significantly on all indicators. The experimental results show that the improved 3DPCANet extracts more discriminative image features and effectively promotes the classification results, because the max-pooling layer and ReLU layer added behind each convolution layer reduce the redundancy of image features after convolution. Therefore, the improved 3DPCANet is used for feature extraction in the subsequent experiments. In addition, because the results on mReHo with the 0.01–0.08 Hz band are better than those with 0.01–0.04 Hz, the 0.01–0.08 Hz band is used for the mReHo transformation in subsequent experiments.

(2) The role of fusion strategies
To verify the effectiveness of the data fusion, multimodal data fusion and multiple classifiers are used in this paper. The detailed results are shown in Table 3.
As can be seen from Table 3, compared with the single-modal methods, the fusion method can effectively improve the experimental results. However, the results of directly fusing the two kinds of features in series are still poor due to feature redundancy. When CCA is used to fuse the features of the two transformed images, better results for AD patients at different stages are obtained, because CCA can find the most relevant classification features of the two images, and the fused features enhance the discriminative power of classification. Among these results, the classification accuracy of SMC vs. MCI is 95.00%, and the F1 score and AUC are 95.65% and 92.71%, respectively. The evaluation indicators of NC vs. SMC, NC vs. MCI, NC vs. AD, and MCI vs. AD are all improved compared with those of the single-modal methods. Obviously, more effective classification features of AD patients at different stages can be mined by CCA.
SMC and MCI are early stages of AD, so changes in brain structure are small, and clinical diagnosis can easily result in misdiagnosis. NC vs. MCI and SMC vs. MCI are classified with accuracies of 88.89% and 95.00%, F1 scores of 86.96% and 95.65%, and AUCs of 82.22% and 92.71%, respectively. The experimental results show that the proposed method can effectively classify the different stages of AD, including the initial stages that are difficult to diagnose. In addition, because the results with the SVM are better than those with the softmax classifier, the SVM is used in subsequent experiments.
Figure 5 shows the ROC curves of NC vs. SMC, NC vs. AD, SMC vs. MCI, and MCI vs. AD. Each subfigure includes four groups of experiments: mALFF image classification, mReHo image classification, mALFF–mReHo tandem (serial) fusion classification, and mALFF–mReHo CCA fusion classification. Specificity is presented on the abscissa, and sensitivity on the ordinate. It can be seen from the ROC curves in the four subfigures that the proposed method has the largest AUC compared with the single-modal methods and direct serial fusion.

The mReHo and mALFF maps for MCI vs. AD, NC vs. AD, NC vs. MCI, NC vs. SMC, SMC vs. AD, and SMC vs. MCI are visualized using the REST Slice Viewer, as shown in Figures 6 and 7, respectively. In these maps, between-group differences are identified with a two-sample t-test.


It can be seen from Figures 6 and 7 that the brain regions influenced in the mReHo transformation include the precentral gyrus, calcarine cortex, posterior cingulate cortex, cuneus, lingual gyrus, medial and paracingulate gyri, superior occipital gyrus, fusiform gyrus, superior parietal gyrus, middle temporal gyrus, and hippocampus, while the brain regions influenced in the mALFF transformation include the fusiform gyrus, inferior temporal gyrus, hippocampus, middle occipital gyrus, calcarine cortex, middle temporal gyrus, precentral gyrus, lingual gyrus, and cingulate gyrus. Therefore, brain regions such as the fusiform gyrus, hippocampus, calcarine cortex, middle temporal gyrus, precentral gyrus, and lingual gyrus play an important role in classification, and these regions are also the focus of this paper.

(3) Comparison results of different methods
In this paper, the experimental results were compared with state-of-the-art methods, as shown in Table 4. Fewer experimental data were used to obtain better classification results for NC vs. MCI and MCI vs. AD, and the NC vs. AD results are close to those of the other methods. Because CCA performs dimensionality reduction and fusion on the fMRI transformation images in the proposed method, feature redundancy and image noise are effectively reduced. Therefore, the accuracy of image classification is increased, and the best experimental results are obtained.
4. Conclusion
In this paper, an AD classification method based on image transformation and feature fusion is proposed. First, fMRI data are transformed into mALFF and mReHo images. Then, an improved 3DPCANet is proposed for feature extraction and is applied to the two kinds of transformation images, respectively, and the two kinds of features are fused by CCA. Finally, an SVM is used to classify AD patients at different stages. SMO was used to find the global optimal solution of the SVM. Besides the SMO method, some of the most representative computational intelligence algorithms can also be used to solve this problem, such as monarch butterfly optimization (MBO), the earthworm optimization algorithm (EWA), elephant herding optimization (EHO), the moth search (MS) algorithm, the slime mould algorithm (SMA), and Harris hawks optimization (HHO). The experimental results show that the improved 3DPCANet in the proposed method reduces feature redundancy and image noise and can extract texture and nonlinear features of brain images, because the max-pooling layer and ReLU layer are added behind each convolution layer, which makes the classification features more abundant and robust. Compared with the single-modal method, fusing the two kinds of fMRI features with CCA yields better results, which shows that the fusion strategy can assist medical personnel in accurately diagnosing SMC, MCI, EMCI, LMCI, and AD patients.
Data Availability
The fMRI dataset used in this study comes from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). The fMRI dataset includes 34 AD patients, 57 EMCI patients, 35 LMCI patients, 26 SMC patients, and 50 NC. Experimental data were obtained by sending an email to ADNI and signing the related agreement. Since this laboratory studies the classification of Alzheimer’s disease by fusing fMRI and sMRI image information, subjects possessing both fMRI and sMRI images were selected from the ADNI dataset. The ADNI dataset is available at http://adni.loni.usc.edu/.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this manuscript.
Acknowledgments
This work is supported by the Joint Project of Beijing Natural Science Foundation and Beijing Municipal Education Commission (no. KZ202110011015).