Abstract
Cognitive impairment has a significantly negative impact on global healthcare and the community. Preserving cognition and mental retention among older adults becomes increasingly difficult with aging. Early detection of cognitive impairment can reduce the risk of a prolonged condition progressing to permanent mental damage. This paper aims to develop a machine learning model that detects and differentiates cognitive impairment categories, namely severe, moderate, mild, and normal, by analyzing neurophysical and physical data. Keystroke dynamics and a smartwatch have been used to extract individuals' neurophysical and physical data, respectively. An advanced ensemble learning algorithm named Gradient Boosting Machine (GBM) is proposed to classify the cognitive severity level (absence, mild, moderate, and severe) based on Standardised Mini-Mental State Examination (SMMSE) questionnaire scores. The statistical method Pearson's correlation and the wrapper feature selection technique have been used to analyze and select the best features. The proposed GBM algorithm was then applied to those features, and the result has shown an accuracy of more than 94%. This paper adds a new dimension to the state of the art by predicting cognitive impairment from neurophysical and physical data together.
1. Introduction
Cognitive impairment, also known as a neurocognitive disorder, is a loss of cognitive function. It has destructive effects on individuals and the community as well. People with this condition have problems with perception, attention, and memory, which are essential building blocks of human cognition. Cognitive impairment is also associated with psychiatric disorders (e.g., depression, insomnia, psychotic symptoms, etc.) [1–3] and even physical diseases, such as diabetes mellitus (DM) and cardiovascular diseases [4]. People with cognitive impairment also experience a diminished quality of life [5].
Cognitive impairment can cause many psychological symptoms in patients [6]. Its devastating consequences may increase the risk of dementia [7]. A study has shown that about 30–40% of cases with cognitive impairment subsequently progress to dementia [8]. The total estimated cost of dementia was US$818 billion in 2015, about 1.09% of worldwide gross domestic product [9]. The economic burden and pathological complexities faced by people with cognitive impairment are undoubtedly even more critical [10]. Researchers have estimated that by 2030, about 75 million people will be living with dementia, and this will cost the community US$2 trillion [11]. Early detection of cognitive impairment status supports sufferers by allowing them to plan for the future and receive early treatment [12–14].
At present, the most promising approach to confining or limiting this overwhelming course is identifying at-risk individuals and starting intervention early [15]. Many researchers have explored neurobiological, hereditary, EEG signal, and neuroimaging biomarkers for cognitive impairment diagnosis, especially in Alzheimer's disease [15, 16] and also dementia [17]. Magnetic resonance imaging (MRI) [18] and neuroimaging techniques have been broadly used to detect cognitive impairment [19–21]. Many AI-inspired approaches have been explored, yet no quantitative analysis of their accomplishment has been proposed. AI approaches using machine learning, artificial neural networks, and deep learning show significant improvements in impairment detection but still face challenging issues.
We have proposed an advanced ensemble learning algorithm named Gradient Boosting Machine (GBM) to detect cognitive impairment among older adults. Data obtained from the smartwatch and keystrokes were preprocessed and analyzed through Pearson's correlations. Then, the wrapper feature selection technique was used to select the best features. The experimented algorithms were chosen by observing the distribution (standard deviation, outliers, etc.) of our dataset. The selected features have been trained and tested with the proposed algorithms to determine the best prediction results. Our proposed method highlights the following:
(1) We have proposed a combination of physical and neurophysical data to detect cognitive impairment levels.
(2) A customized conventional machine learning technique is performed to detect cognitive impairment, and classification performances are compared with other models.
(3) The accuracy of this quantitative analysis of detecting cognitive impairment is high.
(4) In particular, our proposed method achieves better accuracy in predicting mild cognitive impairment (MCI) than previous work.
The healthcare services area is perhaps the leading region for AI applications. It is quite possibly the most complex field [22] and may be the most challenging, particularly in the areas of diagnosis and prediction [23]. Early intervention can reduce cognitive deterioration, yet current cognitive assessments can be ineffective, while technology use among older adults keeps growing. Our proposed methodology can therefore help older adults maintain a better quality of life.
2. Related Works
There is a great deal of ongoing research on the prediction of cognitive impairment using simple-to-deep learning algorithms. An artificial neural network (ANN) algorithm has been used to distinguish cognitive state using multicenter neuropsychological test data with excellent accuracy [24]. Reference [24] was confined to neuropsychological tools for diagnosing cognitive impairment. Random forest survival analysis and semiparametric survival analysis (Cox proportional hazards) were used together to evaluate the relative significance of 52 predictors of cognitive impairment and dementia [25]. Reference [25] was time-consuming research with some limitations. One is that the predictive relationships were based on correlational analysis, which is implicitly bidirectional. The other is that cognitive outcomes were based on a success index for self-respondents and a ranking measure for proxy respondents rather than on clinical diagnosis. Artificial Intelligence (AI) approaches, including supervised and unsupervised machine learning (ML), deep learning, and natural language processing, have been applied to cognitive impairment in a conceptual overview of the topic, emphasizing the features explored [26]. More effective methods have been experimented with for monitoring cognitive function using keystrokes [27] and linguistic characteristics with IT [28]. Some limitations were mentioned that should be solved, such as security concerns about providing personal data. Score datasets from the “Panoramix” suite of six serious digital games (“Episodix,” “Attentix,” “Semantix,” “Workix,” “Procedurix,” and “Gnosix”) have been experimented with through several well-known ML algorithms (SVM, CART, and LR) to detect cognitive impairment [29], but the results may be discriminatory when targeting older adults. Based on the b test’s accuracy, a model has been developed to detect cognitive symptom malingering in predicting malingerers of mild cognitive impairment [30]. This research was based on patients’ medical symptom datasets. These models’ applicability has spread in different directions [31, 32]. Magnetic resonance imaging (MRI) [33], in combination with multiplex neural networks [34], and resting-state functional magnetic resonance imaging (rs-fMRI), in combination with graph theory [35], have been used to isolate healthy brains from progressive mild cognitive impairment (pMCI) in the diagnosis of AD and MCI. These studies were based on functional data. When applied to functional data from groups of healthy control subjects and MCI and AD patients, AD and MCI could be identified from induced changes to the brain network. Based on cognitive neuroscience researchers’ datasets of abnormal activity routines, a novel hybrid statistical-symbolic technique can detect cognitive impairment [36]. This study achieved promising results, but the recognition method was based only on nonprobabilistic rules that strictly determine the detection of abnormal behavior from a user-defined set of observations. Besides, based on routine primary care patient datasets, conventional statistical methods and modern machine learning algorithms have been used to develop a risk score [37] to determine how people may develop dementia [38]. A few research studies have been published in which systematic [39], quantitative, and critical reviews [40] analyzed the prediction of cognitive impairment and dementia using different machine learning techniques.
A few research studies have also developed machine learning algorithms to detect cognitive impairment based only on datasets from authorized clinical questionnaires [41, 42].
3. Materials and Method
This study aims to develop a model for classifying cognitive impairment levels using keystroke patterns and physical activity information. Figure 1 presents a flowchart describing the development of the whole system, which consists of four phases. In the data collection phase (Figure 1(a)), three types of data (keystroke patterns, physical activity, and SMMSE score) have been collected: keystroke pattern data as neurophysical data are collected from a developed Android application, regular physical activity data are collected from smartwatches, and SMMSE data are collected from the questionnaire session. After extracting features, feature analysis has been performed to determine the correlations among features and then select the highly correlated features, as shown in Figure 1(b). After analyzing the dataset features, a machine learning algorithm has been chosen, as shown in the machine learning approach phase (Figure 1(c)). The result analysis phase demonstrates the relation between the feature outputs and the SMMSE score output using the regression model and shows the validation using 10-fold cross-validation (Figure 1(d)).

Participants’ mental health status in terms of cognitive impairment has been assessed using the twelve-item Standardised Mini-Mental State Examination (SMMSE). The British Columbia Ministry of Health validates this SMMSE approach, and the questionnaire can also be found on their website [43]. Several studies [44–47] used these questions for related cognitive issues. For this study, we also selected these questions, and 33 participants were asked the SMMSE questions to generate the SMMSE score. This score represents the actual value of cognitive impairment used to label the participant for group selection. There are 26 males and seven females whose age range is between 50 and 65. They were followed up for up to 6 months. In this study, the participants’ cognitive impairment levels have been categorized into four types based on the SMMSE score: normal (SMMSE score ≥ 25), mild (21 ≤ SMMSE score ≤ 24), moderate (10 ≤ SMMSE score < 21), and severe (SMMSE score ≤ 9). Table 1 represents the distribution of the cognitive impairment scores based on the SMMSE. SMMSE scores were collected from participants every day. Some data were excluded because of insufficient information. Table 2 represents a sample of our datasets.
3.1. Data Collection
We have collected the datasets through a research organization in Bangladesh. The study’s aim was to detect cognitive impairment via keystroke patterns and the activities participants performed every day, such as sleeping and walking. This study’s data were collected using smart environment technologies, including an Android application and wearable smartwatches.
On a given day, the participant came to the research center’s smart apartment and performed the keystroke pattern activity, and this neurophysical data was recorded. Physical data were collected from the smartwatch worn all day long. The SMMSE score was also generated through a questionnaire session.
Participants were assigned identifiers during the study. The identifiers were randomized before the data were made available to the research team.
3.2. Data Preprocessing
3.2.1. SMMSE Score Estimation
The SMMSE score has been taken at the beginning of the study and represents the participant’s cognitive impairment severity. The presented study explores how each extracted feature correlates with cognitive impairment symptoms and can differentiate participants with cognitive impairment. This questionnaire score was estimated by applying a linear regression model [48] to the extracted features. The standard linear regression model can be represented as

$$\hat{E}_i^{smmse} = \beta_0 + \sum_{n=1}^{N} \beta_n f_n \qquad (1)$$

where $\hat{E}_i^{smmse}$ is the estimated SMMSE score for the ith participant, $f_n$ is the nth of the N features, and $\beta_0, \beta_1, \ldots, \beta_N$ are the coefficients of the linear regression model.
Lasso regularization [49] was used to minimize the error between the estimated score and the actual SMMSE score. Lasso regularization restricts the regression model coefficients from becoming too large, and it performed well in the model as all the features were highly correlated:

$$\hat{E}_{lr} = \sum_{i=1}^{m} \left(E_i^{smmse} - \hat{E}_i^{smmse}\right)^2 + \lambda \sum_{n=1}^{N} \left|\beta_n\right| \qquad (2)$$

where $\hat{E}_{lr}$ is the lasso-regularized objective and m is the number of observations. The first part of equation (2) represents the residual sum of squares, the other part represents the sum of the absolute values of the coefficient magnitudes, and λ denotes the amount of shrinkage.
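For illustration, the following is a minimal sketch of this score-estimation step using lasso-regularized linear regression in scikit-learn; the file name, column names, and shrinkage value are illustrative rather than the study’s exact settings.

```python
# Sketch: estimating the SMMSE score from extracted features with a
# lasso-regularized linear regression (column names are hypothetical).
import pandas as pd
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, cross_val_predict

df = pd.read_csv("features.csv")              # hypothetical feature table
features = ["total_time", "avg_time", "error_words", "absolute_energy",
            "quality_sleep_time", "walking_steps", "heart_pulse"]
X, y = df[features], df["smmse_score"]

model = Lasso(alpha=0.1)                      # alpha plays the role of the shrinkage lambda
y_est = cross_val_predict(model, X, y, cv=LeaveOneOut())

rmsd = ((y_est - y) ** 2).mean() ** 0.5       # error between estimated and actual scores
print(f"RMSD: {rmsd:.3f}")
```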
3.2.2. Data Augmentation
Class imbalance can damage a predictive model’s performance in most machine learning algorithms because the algorithms focus more on detecting the larger classes. Our dataset has a class imbalance problem, which suggests that the predictive models could detect the minority class poorly. We have tried to mitigate the class imbalance problem by augmenting our datasets with 10% additional data using the Conditional Tabular GAN (CTGAN) [50] algorithm with high fidelity. CTGAN is a GAN designed to synthesize tabular data, proposed in 2019 by the same authors as TGAN [51]. As shown in Figure 2, the statistical descriptions of the original data and the augmented data are given. Every value, such as the mean, standard deviation, minimum, and maximum, is almost the same in the original and augmented data. This indicates that, after augmentation, the distribution of the dataset remains the same.
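A minimal sketch of this augmentation step, assuming the open-source `ctgan` package and the hypothetical column names from the earlier sketch, is given below.

```python
# Sketch: synthesizing roughly 10% additional rows with CTGAN and comparing
# basic statistics before and after augmentation. Epoch count is illustrative.
import pandas as pd
from ctgan import CTGAN

df = pd.read_csv("features.csv")              # hypothetical original dataset
discrete_cols = ["impairment_level"]          # categorical target column (hypothetical name)

synthesizer = CTGAN(epochs=300)
synthesizer.fit(df, discrete_cols)

n_new = int(0.1 * len(df))                    # about 10% extra data
augmented = pd.concat([df, synthesizer.sample(n_new)], ignore_index=True)

print(df.describe())                          # statistics of the original data
print(augmented.describe())                   # statistics after augmentation
```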

3.3. Feature Extraction
A total of 11 features have been extracted from participants’ neurophysical behavior and physical activity pattern information: four neurophysical behavior features from our developed application and another seven physical activity features from wearable devices, as shown in Table 3.
3.4. Feature Subgroup
Our analysis has shown that features from working and nonworking days have some relationship based on the extracted daily features. So, we have divided our extracted features into three subgroups: (i) baseline, (ii) weekdays, and (iii) weekend days. Each subgroup feature is computed as the average of the corresponding daily features:

$$\bar{E}_n = \frac{1}{n} \sum_{i=1}^{n} D_i$$

In the equation, $\bar{E}_n$ represents the feature subgroup, n is the total number of days in the feature subgroup (baseline, n = 7; weekdays (Sunday to Thursday), n = 5; and weekend days (Friday and Saturday), n = 2), and $D_i$ represents the ith day’s features.
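A small pandas sketch of this subgroup averaging follows; the file and column names (date, participant, and the feature columns) are hypothetical.

```python
# Sketch: averaging daily features into the baseline, weekday, and weekend subgroups.
import pandas as pd

daily = pd.read_csv("daily_features.csv", parse_dates=["date"])   # hypothetical daily table
daily["dow"] = daily["date"].dt.dayofweek          # Monday=0 ... Sunday=6

weekend_mask = daily["dow"].isin([4, 5])           # weekend days: Friday and Saturday

subgroups = {
    "baseline": daily,                             # all 7 days
    "weekdays": daily[~weekend_mask],              # Sunday to Thursday
    "weekend":  daily[weekend_mask],               # Friday and Saturday
}
feature_cols = [c for c in daily.columns if c not in ("date", "dow", "participant")]
averaged = {name: g.groupby("participant")[feature_cols].mean()
            for name, g in subgroups.items()}
```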
3.5. Feature Selection
Feature selection is a strategy for choosing optimal features from datasets. This technique improves model performance and reduces complexity and computational cost. It can also improve accuracy, reduce overfitting, speed up training, improve data visualization, and increase the explainability of the model. In this study, we have used the Pearson correlation coefficient [53] to analyze the features. Pearson’s correlation coefficient formula is

$$r = \frac{\sum_{i=1}^{m} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{m} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{m} (y_i - \bar{y})^2}}$$

where r is the correlation coefficient, $x_i$ are the values of the x-variable in a sample, $\bar{x}$ is the mean of the values of the x-variable, $y_i$ are the values of the y-variable in a sample, and $\bar{y}$ is the mean of the values of the y-variable.
Using Pearson’s correlation, we can generate an r value for each feature to rank the significant features of the dataset. This r value can vary between −1 and 1. Figure 3 shows the overall scenario of every feature’s correlation with each other. The p value also plays a significant role in choosing the features. Figure 4 shows the correlations with each other based on p values. If the p value of a feature is less than 0.05 and close to 0, that feature is considered significant. Our analysis is shown in the r value heatmap and the p value heatmap. As shown in Figures 3 and 4, we can observe that some features are highly correlated, while some features are less correlated. This indicates that some features would be significant for our model and some would not.
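A short sketch of this r and p value computation with scipy follows; it reuses the hypothetical `df` and `features` names from the earlier sketch.

```python
# Sketch: ranking each extracted feature by Pearson's r and p value against
# the SMMSE score, then keeping the features with p < 0.05.
import pandas as pd
from scipy.stats import pearsonr

rows = []
for col in features:                              # features: list of extracted feature columns
    r, p = pearsonr(df[col], df["smmse_score"])
    rows.append({"feature": col, "r": r, "p": p})

ranking = pd.DataFrame(rows).sort_values("p")
significant = ranking[ranking["p"] < 0.05]        # features treated as significant
print(significant)
```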


Then, the wrapper feature selection method [54] has been used to select the model’s best features. Regression and classification algorithms have been used to evaluate the selected features’ performance after 10-fold cross-validation of the data. The regression model’s features have been selected using the root mean square deviation (RMSD) of the SMMSE score estimation.
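One possible realization of such a wrapper search, assuming scikit-learn’s sequential forward selection with 10-fold cross-validation and an RMSE-based score, is sketched below; it is not the study’s exact setup.

```python
# Sketch: wrapper-style feature search with sequential forward selection,
# scoring the regression model by (negative) RMSE of the estimated SMMSE score.
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold

estimator = Lasso(alpha=0.1)
selector = SequentialFeatureSelector(
    estimator,
    n_features_to_select="auto",
    direction="forward",
    scoring="neg_root_mean_squared_error",      # equivalent to minimizing RMSD
    cv=KFold(n_splits=10, shuffle=True, random_state=96),
)
selector.fit(X, y)                               # X, y as in the earlier lasso sketch
selected = X.columns[selector.get_support()]
print(list(selected))
```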
3.6. Methods
Cognitive impairment was classified into four categories (Table 3), and for evaluating the classification performance, we mainly focus on supervised learning. Two famous classification algorithms, Ensemble Learning (EL) and Support Vector Machine (SVM), were considered to detect the users’ cognitive impairment.
3.6.1. Gradient Boosting Machine (GBM)
The Gradient Boosting Machine (GBM) [55] is an advanced Ensemble Learning (EL) algorithm. It is a supervised machine learning algorithm for regression and classification problems. It builds a prediction model from an ensemble of weak learners, commonly decision trees. When the weak learner is a decision tree, the resulting algorithm is called gradient boosted trees, which usually outperforms random forest. It creates the models sequentially, as other boosting techniques do, and each subsequent model tries to reduce the error of the previous model. Each model reduces the error of the previous model by fitting on the residual errors of the previous prediction. This is done to determine whether there are any patterns in the errors that the previous model missed. We repeat the same process until either the error becomes zero or we reach the stopping criterion, which is the limit on the number of models to build. The models are then combined, allowing optimization of an arbitrary differentiable loss function. The GBM working procedure is given step by step in a block diagram, as shown in Figure 5. In a nutshell, we built our first model, named H0, on features X and target y. Then, we repeatedly built the next model on the error of the previous model, up to the nth model, as shown in Figure 6.


H0 gives initial predictions and generates an error e0 through the function F0(X), as shown in equation (4). Then, the next model adds a new prediction of this error to F0(X), creating a new function F1(X), as shown in equation (5). Similarly, we built the next models, as shown in equation (6), up to the nth model:

$$F_0(X) = H_0(X), \qquad e_0 = y - F_0(X) \qquad (4)$$

$$F_1(X) = F_0(X) + H_1(X, e_0), \qquad e_1 = y - F_1(X) \qquad (5)$$

$$F_2(X) = F_1(X) + H_2(X, e_1), \qquad e_2 = y - F_2(X) \qquad (6)$$

The final equation looks like equation (7):

$$F_n(X) = F_{n-1}(X) + H_n(X, e_{n-1}) \qquad (7)$$

In equation (7), $F_{n-1}(X)$ is the prediction of the previous models, to which the newly predicted errors are added. Finally, we are left with some error named $e_n$. So, at every step, we are modeling the errors, which helps us reduce the overall error, and our goal is that the error tends to zero (i.e., $e_n = 0$). Each model here tries to boost the performance of the ensemble. We add a coefficient γ, and the proper value of this coefficient will be decided using the gradient descent technique.
The generalized equation is shown in equation (8):

$$F_n(X) = F_{n-1}(X) + \gamma_n H(X, e_{n-1}) \qquad (8)$$

Here, $F_{n-1}(X)$ represents all the previous models combined, $\gamma_n$ represents the coefficient, and $H(X, e_{n-1})$ is the current working model function, where X represents the features and $e_{n-1}$ is the previous model’s error.
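To make the sequential residual fitting of equations (4)–(8) concrete, the following is a minimal sketch using shallow scikit-learn regression trees as the weak learners; a fixed shrinkage coefficient stands in for the per-model γ discussed below, and all names are illustrative.

```python
# Sketch: each new tree H_n is fit to the residual e_{n-1} of the running
# model F_{n-1}, and the running prediction is updated with a fixed gamma.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gradient_boost_fit(X, y, n_models=32, gamma=0.1, max_depth=6):
    y = np.asarray(y, dtype=float)
    f = np.full(len(y), y.mean())          # H0: a constant initial prediction
    trees = []
    for _ in range(n_models):
        residual = y - f                   # e_{n-1} = y - F_{n-1}(X)
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        f = f + gamma * tree.predict(X)    # F_n(X) = F_{n-1}(X) + gamma * H_n(X)
        trees.append(tree)
    return y.mean(), trees

def gradient_boost_predict(X, base, trees, gamma=0.1):
    f = np.full(X.shape[0], base)
    for tree in trees:
        f += gamma * tree.predict(X)       # accumulate the boosted corrections
    return f
```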
To understand the loss function and calculate $\gamma_n$, we consider a squared-error loss function, as shown in equation (10):

$$L = \frac{1}{2}\left(y - \hat{y}\right)^2 \qquad (10)$$

where y is the actual value and $\hat{y}$ is the predicted value of the last model. So, half of the squared difference of these two is the loss (the factor 1/2 only simplifies the derivative).
In our case, the target is y, and $\hat{y}$ can be considered the updated prediction of the last model. So, we can replace $\hat{y}$ with $F_n(X)$, and the new equation will be as follows:

$$L = \frac{1}{2}\left(y - F_n(X)\right)^2 \qquad (11)$$
Here, we use the gradient descent technique and differentiate equation (11) with respect to $F_n(X)$. We obtain the following equation:

$$\frac{\partial L}{\partial F_n(X)} = -\left(y - F_n(X)\right) \qquad (12)$$
To simplify equation (12), we multiply both sides by −1 and obtain the following equation:

$$-\frac{\partial L}{\partial F_n(X)} = y - F_n(X) \qquad (13)$$
Now the right-hand side of the equation is exactly the error we have been discussing. Here, the error $e_n$ is actually $(y - F_n(X))$, so $e_n$ is also equal to the left-hand side of the equation. It can therefore be replaced by H, the model fit to this error, and our final equation will be as follows:

$$H(X, e_n) \approx e_n = -\frac{\partial L}{\partial F_n(X)} \qquad (14)$$
Now the aim is to minimize the overall loss function. The overall loss is the loss we get from all the models we have built so far, as shown in the following equation:

$$L_{overall} = \sum_{i=1}^{m} L\left(y_i,\; F_{n-1}(x_i) + \gamma_n H(x_i, e_{n-1})\right) \qquad (15)$$
The first part inside the overall loss, $F_{n-1}(x_i)$, is fixed, as these are the predictions we have generated from the previously built models, so it cannot be changed. The second part of the equation contains the loss of the current model, and the model’s own predictions cannot be changed either, but we can still change the gamma value. We need to select a value of gamma such that the overall loss is minimized, and this value is selected using a gradient descent process. The idea is to minimize the overall loss by deciding the right value of gamma for each model. The next model, when we build it, will again have a coefficient $\gamma_n$, and we again try to select the right value so that the overall loss is minimal. For this, we focus on a special case of the gradient boosting model, the Gradient Boosting Decision Tree (GBDT). In this case, each of the models H we build is a tree. An interesting aspect of GBDT is that the gamma value is calculated at every leaf level, as shown in Figure 7, where each leaf of the tree has its own gamma value.
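For the squared-error loss used above, the gamma that minimizes the loss within a leaf is simply the mean residual of the samples falling into that leaf; the following is a minimal sketch of that per-leaf computation (the function name and setup are illustrative).

```python
# Sketch: per-leaf gamma in GBDT. For squared-error loss, the optimal gamma
# in each leaf is the mean residual of the samples routed to that leaf.
import numpy as np

def leaf_gammas(tree, X, residual):
    """Return the optimal gamma for every leaf of a fitted regression tree."""
    residual = np.asarray(residual, dtype=float)
    leaf_ids = tree.apply(X)                       # leaf index for each sample
    return {leaf: residual[leaf_ids == leaf].mean()
            for leaf in np.unique(leaf_ids)}
```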

3.6.2. Support Vector Machine (SVM)
The Support Vector Machine (SVM) [56] is a very popular and widely used machine learning algorithm for classification and regression [57]. It builds a model that is as simple as possible so that it can be investigated mathematically. SVM requires relatively little computing effort to find a hyperplane in an n-dimensional space (n being the number of features) that distinctly separates the classes. The current research utilized a Sequential Minimal Optimization (SMO) algorithm with a polynomial kernel to optimize the SVM classifier model. SVM was considered for dealing with the issue of overfitting on high-dimensional data.
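A brief sketch of such a classifier follows, using scikit-learn’s SVC (which is backed by libsvm’s SMO-type solver) with a polynomial kernel; the kernel degree, regularization value, and label column name are illustrative.

```python
# Sketch: SVM with a polynomial kernel for the four impairment levels.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

y_cls = df["impairment_level"]                       # hypothetical label column
svm_clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
svm_clf.fit(X, y_cls)                                # X as in the earlier sketches
print("training accuracy:", svm_clf.score(X, y_cls))
```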
3.7. Tuning the Model
Our model has been tuned with some hyperparameters, set to customized values to improve our model’s performance. In this case, we have set “alpha” to 1.0 and “criterion” to friedman_mse. The “n_estimators” is set to 32, which creates 32 decision trees within the GBM. The “learning_rate” is set to 0.1, which determines each tree’s impact on the outcome. The “random_state” is set to 96; it is a random number seed so that the same random numbers are generated every time. The “colsample_bytree” is set to 0.7, which controls random feature selection for each tree. The “max_depth” is set to 6; it is a stopping criterion (i.e., the maximum depth to which a tree can grow).
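A sketch of this configuration using scikit-learn’s GradientBoostingClassifier is given below; note that “colsample_bytree” is an XGBoost-style name that maps roughly to max_features here, and “alpha” has no direct classifier equivalent, so those two are approximations while the other values mirror the text.

```python
# Sketch: the tuned GBM configuration (parameter mapping is approximate).
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# X: feature matrix, y_cls: impairment-level labels (hypothetical names).
X_train, X_test, y_train, y_test = train_test_split(
    X, y_cls, test_size=1/3, random_state=96, stratify=y_cls)

gbm = GradientBoostingClassifier(
    n_estimators=32,            # 32 decision trees within the GBM
    learning_rate=0.1,          # each tree's impact on the outcome
    max_depth=6,                # stopping criterion: maximum tree depth
    max_features=0.7,           # random feature subsampling (~ colsample_bytree)
    criterion="friedman_mse",   # split quality measure
    random_state=96,            # fixed random seed
)
gbm.fit(X_train, y_train)
print("test accuracy:", gbm.score(X_test, y_test))
```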
3.8. The Model Evaluation and Validation
We have used conventional machine learning algorithms to analyze our participants’ neurophysical conditions in this study. Accuracy, precision, recall, F1-score, and the ROC curve are employed as evaluation metrics in our experiments to represent our work’s contribution. We divided our dataset into two parts: two-thirds of the dataset for the training process and the remaining one-third for the testing process. To validate the model, we applied 10-fold cross-validation with a 5×2 approach on the dataset. First, the dataset was divided randomly into two halves. Second, one part was employed in training and the other in testing, and then the procedure was repeated with the roles reversed. This procedure was applied five times. Finally, we averaged the results, generated a projected score, and compared it with the actual score. This cross-validation procedure has the advantage that all data are used for both validation and training. We have presented a graph comparing the generated score with the actual score in the Results section. The root mean square deviation (RMSD) has been used to calculate the error between the estimated score ($\hat{E}_i^{smmse}$) and the actual SMMSE score ($E_i^{smmse}$). The RMSD value defines the model performance and has been calculated as follows:

$$\mathrm{RMSD} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(\hat{E}_i^{smmse} - E_i^{smmse}\right)^2}$$
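A hedged sketch of the 5×2 cross-validation evaluation follows; it reuses X, y_cls, and the gbm configuration from the earlier sketches and averages accuracy, precision, recall, and F1 over the ten train/test runs.

```python
# Sketch: five repetitions of a random 50/50 split, training on one half and
# testing on the other, then averaging the classification metrics.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import StratifiedKFold

scores = []
for rep in range(5):
    cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=rep)
    for train_idx, test_idx in cv.split(X, y_cls):
        gbm.fit(X.iloc[train_idx], y_cls.iloc[train_idx])
        pred = gbm.predict(X.iloc[test_idx])
        prec, rec, f1, _ = precision_recall_fscore_support(
            y_cls.iloc[test_idx], pred, average="macro", zero_division=0)
        scores.append((accuracy_score(y_cls.iloc[test_idx], pred), prec, rec, f1))

print("mean accuracy, precision, recall, F1:", np.mean(scores, axis=0))
```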
4. Results
4.1. SMMSE Score Prediction
From participants’ neurophysical behavior and physical activity patterns, 11 features have been extracted and divided into three subgroups. The linear regression model has been used to estimate each feature’s corresponding cognitive impairment score. According to the cognitive impairment level scores, four groups have been categorized: normal (SMMSE score ≥ 25), mild (21 ≤ SMMSE score ≤ 24), moderate (10 ≤ SMMSE score < 21), and severe (SMMSE score ≤ 9). Each subgroup’s feature data distribution has a relationship with cognitive impairment symptoms. It has also been found that seven features have a high correlation with cognitive impairment symptoms, as their p values are less than 0.05 and close to 0, as discussed in Section 3.5. These seven features are total time (TT), error number of words (ENW), average time (AVG), absolute energy (AE), quality sleeping time (QST), walking step (WS), and heart pulse data (HPD).
Table 4 represents the relationship between the regression model’s estimated SMMSE score and the actual score, evaluated using RMSD. The error has been minimized using the lasso regularization method, as discussed in Section 3.2.1. Each of the feature results shown in Table 4 is calculated using leave-one-out cross-validation. A subset has been selected using the wrapper feature selection method among the 33 features from the three subgroups (baseline, weekday, and weekend), and this technique shows the lowest RMSD of 3.125. This value demonstrates that the predicted SMMSE score has a strong correlation with the actual SMMSE score.
4.2. Cognitive Impairment Level Detection
As shown in Figure 8, the data distribution analysis demonstrates that the features are highly dispersed, with high standard deviations and many outliers. A rule-based algorithm such as decision trees or ensemble learning should work efficiently for this kind of feature dataset. In this regard, we have chosen the Gradient Boosting Machine (GBM), an ensemble learning algorithm. To evaluate and justify our selection, we have also experimented with a distance-based algorithm, the Support Vector Machine (SVM). Table 5 represents the overall accuracy of the models used. We can see that the Gradient Boosting Machine (GBM) has the highest accuracy of 94.8%.

In terms of the four cognitive levels, (i) normal, (ii) mild, (iii) moderate, and (iv) severe, the classification algorithm results are shown in Table 6, where classification performance is demonstrated by the results of the individual classifiers as well as the individual classes. Analyzing Table 6, we can easily identify the better algorithm, as each classifier’s precision, recall, F1-score, and accuracy are given. The accuracy results show that GBM generally performed excellently compared with the other classification algorithm. In the “normal” class, the SVM accuracy looks slightly better, but GBM performed well on all four cognitive impairment levels.
The GBM classifier’s performance in terms of the receiver operating characteristic (ROC) curve is shown in Figure 9. As shown in Figure 9, considering the “normal” level as the negative test sample, the ROC curve in terms of cognitive impairment reached a maximum true positive rate of approximately (i) 99% for mild cognitive impairment, (ii) 96% for severe cognitive impairment, and (iii) 94% for moderate cognitive impairment. In Figure 10, we have shown 15 random days of data from our participants for every cognitive impairment class. This figure demonstrates and validates the accuracy of the model. The mild (21 ≤ SMMSE score ≤ 24), moderate (10 ≤ SMMSE score < 21), and severe (SMMSE score ≤ 9) levels each have a range, and if the value falls within the range, we have counted it as a true prediction; otherwise, it is a false prediction.


5. Discussion
This study used keystroke pattern data and smart wearable device data to extract information about our participants’ neurophysical and physical behavior patterns. We have used 10-fold cross-validation with a 5×2 approach to validate our model. The model can detect four different cognitive impairment levels (i.e., normal, mild, moderate, and severe) with 94.8% accuracy. This accuracy is higher than that of a previous study, which reported an accuracy rate of 86% [58]; however, that study focused on predicting dementia and mild cognitive impairment. The features extracted from our developed application and wearable device data have shown a strong correlation with the SMMSE score, as found with the regression model.
The study by Vizer and Sears [28] was based on typed text’s keystroke and linguistic features to detect cognitive impairment. Some researchers like Sofi et al. [59] did a meta-analysis on physical activity. In the present study, we have combined keystroke pattern behavior with our participants’ physical activity to detect cognitive impairment.
In our study, using Pearson’s correlation for feature analysis and the wrapper method for feature selection did a great deal to achieve high classification accuracy for each cognitive impairment level (normal, mild, moderate, and severe). To evaluate the classification performance, we have used two popular classification algorithms, GBM and SVM, and GBM has shown better performance and higher accuracy at every cognitive level.
Limitations. Although the model used in this research predicted cognitive impairment level with high accuracy, there are some limitations when interpreting the results. This research did not assess a clinically diagnosed cognitively impaired population because the sample comprised only older adults. The assessment used to evaluate cognitive impairment was a self-report scale, the SMMSE, rather than a clinical evaluation. Typing errors, taking a long time to complete sentences, or being unable to remember words might be critical indicators of cognitive impairment, and some physical issues can be related to what this research tried to find, but noncognitively impaired people might have the same problems for several other reasons. Besides, some participants may not have followed our instructions carefully, which would introduce some data errors, such as misunderstanding questions or not always wearing the smartwatch.
6. Conclusions
Machine Learning (ML) holds significant promise for changing how we diagnose and treat patients with neurocognitive disorders. There exists an enormous assortment of potential features that, in combination, can comprehensively describe the biopsychosocial determinants of an individual and consequently enable a more personalized understanding of cognitive decline. ML algorithms’ performance and potential clinical utility for detecting, diagnosing, and predicting cognitive decline using these features will keep improving as we leverage multifeature, large-scale datasets. Establishing guidelines for research involving AI applications in medical services will be essential to guarantee the quality of results and clinicians’ engagement, and to allow patients and their caregivers to contribute their expertise to refining AI algorithms. This study demonstrated the capability to passively detect cognitive impairment symptoms by monitoring daily physical activities and keystroke patterns. Given that the detection of cognitive impairment level does not depend on traditional self-report psychometric instruments, such a method may improve the identification of cognitive impairment. Early detection of these changes can allow interventions that lessen, delay, or prevent related functional impairments. Therefore, more effective techniques that support the early detection of cognitive changes, especially solutions that continuously leverage normal daily activities, could significantly impact older adults’ health and independence. Given the connection between the cognitive processes required to use technology and those affected by cognitive impairment and stress, this work investigates keystroke and physical characteristics of spontaneously composed content as a potential approach for monitoring cognitive changes. This approach has several advantages over conventional techniques for monitoring cognitive function. The proposed model can examine the totality of the data, not just data from specific stages; it is unobtrusive, and it gathers baseline information for assessment and diagnosis as well as continuous information for everyday monitoring.
Data Availability
The datasets in this study are collected from users as a part of this study. Thus, these can be shared upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest regarding the publication of this paper.
Acknowledgments
This research was supported by the Ministry of Trade, Industry & Energy of the Republic of Korea, as IoT Home Appliance Big-Data Utilization Support Project (A0080517000103) and conducted under a research grant from Kwangwoon University in 2021.