Abstract

In recent years, Internet of Things (IoT) and advanced sensor technologies have gained considerable interest in linking different medical devices, patients, and healthcare professionals to improve the quality of medical services in a cost-effective manner. The evolution of the smart healthcare sector has considerably enhanced patient safety, accessibility, and operational competence while minimizing the costs incurred in healthcare services. Against this background, the current study develops an intelligent energy-aware thermal exchange optimization with deep learning (IEA-TEODL) model for IoT-enabled smart healthcare. The aim of the proposed IEA-TEODL technique is to group the IoT devices into clusters and make decisions in the smart healthcare sector. The proposed IEA-TEODL technique constructs clusters using the energy-aware chaotic thermal exchange optimization-based clustering (EACTEO-C) scheme. In addition, the disease diagnosis model intends to classify the collected healthcare data into either the presence or absence of disease. To accomplish this, the proposed IEA-TEODL technique involves several subprocesses such as preprocessing, K-medoid clustering-based outlier removal, multihead attention bidirectional long short-term memory (MHA-BLSTM), and the weighted salp swarm algorithm (WSSA). The utilization of outlier removal and WSSA-based hyperparameter tuning assists in achieving enhanced classification outcomes. In order to demonstrate the enhanced outcomes of the IEA-TEODL approach, a wide range of simulations was conducted against benchmark datasets. The simulation results inferred the enhanced outcomes of the IEA-TEODL technique over recent techniques under distinct evaluation metrics.

1. Introduction

With the advancements made in smart sensorial media, the Internet of Things (IoT), and cloud techniques, smart healthcare has gained considerable interest in different domains such as healthcare, academia, government, and industry [1]. In recent times, IoT has brought the vision of a smart world into reality, with numerous services in the pipeline generating massive amounts of data. Cloud computing (CC) is well suited as an enabling technology since it presents a flexible stack of software, computing, and storage services at a lower cost [2]. Cloud-based services have the potential to provide a high-quality seamless experience to clinicians, physicians, and other caregivers, anytime and anywhere. While research has been making advances in cloud services and IoT separately, minimal attention has been paid to developing affordable and cost-effective intelligent healthcare services [3]. At present, cloud and IoT technologies have assisted in delivering smart healthcare services on a real-time basis and have made considerable improvements.

With the integration of IoT and cloud, there is a great demand for intelligent smart healthcare systems that provide rapid and seamless responses. Artificial intelligence (AI) and deep learning (DL) techniques can improve decision-making and cognitive behaviour [4]. Advanced electronic applications are presented to intelligent healthcare stakeholders along with smart sensor devices. In spite of these advances, it is challenging to access or find hospitals and medical professionals in intelligent healthcare environments. In general, patients with serious medical needs must be given quick attention and a faster response in order to save their lives [5]. Therefore, the data recorded from patients need to be interpreted and transferred to healthcare professionals with minimum delay, while the results need to be sufficiently accurate so that they can be utilized by healthcare experts for disease prognosis. Hence, a smart healthcare system is required that can resolve the above-mentioned problems and leverage the technology and services available in the intelligent healthcare environment. Figure 1 illustrates the structure of a smart healthcare system.

Though there have been advancements in this domain, the concept of a smart healthcare system remains incomplete without cognitive function. Smart city services can never be exploited completely without the cognitive knowledge of their stakeholders [6]. Even though conventional methods deliver results rapidly, the results are also expected to be highly accurate; most of the time, however, accuracy suffers when the data are complex [7]. In this situation, high accuracy can be accomplished by deep learning (DL) techniques and their different variants. In the literature, these techniques are trained using large datasets [8]. DL is an emerging field that has produced considerable outcomes in sequence prediction, mixed-modality datasets, and natural language processing tasks, and it has grown rapidly in various applications such as computer vision and speech recognition [9, 10].

The current article develops an intelligent energy-aware thermal exchange optimization with deep learning (IEA-TEODL) model for IoT-enabled smart healthcare. The proposed IEA-TEODL technique derives an energy-aware chaotic thermal exchange optimization-based clustering (EACTEO-C) scheme. Besides, a disease diagnosis model is also involved to classify the collected healthcare data into either the presence or absence of disease. To accomplish this, the proposed IEA-TEODL technique involves several subprocesses such as preprocessing, K-medoid clustering-based outlier removal, multihead attention bidirectional long short-term memory (MHA-BLSTM), and the weighted salp swarm algorithm (WSSA). In order to validate the promising performance of the IEA-TEODL technique, a wide range of simulations was performed against benchmark datasets, and the results were validated under different measures.

2. Literature Review

Mansour et al. [11] developed a disease diagnosis system for diabetes and heart disease using IoT and AI convergence methods. The presented technique employed a crow search optimization-based cascaded LSTM (CSO-CLSTM) for disease diagnosis. To accomplish improved classification of healthcare information, CSO was employed for tuning the "weights" and "bias" parameters of the presented approach. The authors in [12] developed a cloud-centric, IoT-based m-healthcare monitoring disease diagnosis system that predicts possible disease occurrence together with its severity level. In that study, key terminology was determined to generate user-based health measurements by examining computation science concepts.

In [13–15], the authors presented a disease diagnosis system with DL as well as IoT. The healthcare information is preprocessed since it contains noise. The preprocessed information is then passed onto an isolation forest (iForest) for outlier recognition with high precision and linear time complexity. The data then undergo a classification step in which DenseNet169 and PSO methods are incorporated to diagnose the disease; the parameters are then tuned to improve the performance. Awotunde et al. [16] developed an IoT-WBN-based architecture with an ML approach. The data collected from wearable sensors such as glucose, body temperature, chest, and heartbeat sensors are transferred by IoT devices to a cloud dataset.

Nagarajan et al. [17] designed an IoT-based, fog-enabled cloud network framework that accumulates real-time healthcare information from patients through a number of healthcare IoT sensor networks. This information is examined by a DL technique deployed in a fog-based healthcare environment. Moreover, the presented approach was utilized in sustainable smart city solutions to estimate real-time processes. Ihnaini et al. [18] proposed an intelligent healthcare system for diabetes based on deep ML and data fusion perspectives. With data fusion, the unrelated computational burden was removed, and the presented system's efficiency in precisely predicting and making recommendations for this severe disease was increased. Finally, an ensemble ML approach was trained for predicting diabetes.

3. The Proposed Model

In this study, a novel IEA-TEODL technique has been developed to accomplish clustering and decision-making in an IoT-enabled smart healthcare environment. The proposed IEA-TEODL technique follows a two-stage process, namely EACTEO-C-based cluster construction and optimal DL-based disease classification. The detailed working of these two modules is elaborated in the succeeding subsections. Figure 2 displays the block diagram of the IEA-TEODL technique.

3.1. Process Involved in EACTEO-C Technique

In the primary stage, the IoT devices are placed in the healthcare environment to gather medical data from patients. In order to achieve effectual energy utilization and data transmission to the cloud server, the EACTEO-C technique is executed to select the cluster heads (CHs) and construct the clusters.

3.1.1. Overview of CTEO Algorithm

The primary aim behind the adoption of the meta-heuristic approach named thermal exchange optimization (TEO) is to cluster the nodes. The temperature model in TEO reflects the interaction behaviour of the nodes [19]. The cooling object denotes the position of a node, whereas the environmental temperature signifies its adjacent nodes. The object is considered as a sensor node; therefore, each node is interpreted either as a cooling object or, conversely, as an environment object.

The primary temperature of every node is defined as follows: $T_i^{0} = T_{\min} + \mathrm{rand} \cdot (T_{\max} - T_{\min})$, where $T_i^{0}$ refers to the primary solution vector of the $i$th node, $T_{\min}$ and $T_{\max}$ signify the limits of the temperature variables, and $\mathrm{rand}$ stands for an arbitrary vector whose components all lie in the range of zero to one. The objective function computes the cost value of every node. A memory that holds some of the optimum vectors and their associated objective function values improves the performance of the technique with no increase in computational cost. In this way, a thermal memory (TM) is utilized to save several optimum solutions at a given moment. During this phase, the solution vectors stored in the TM are added to the population, and an equal number of the worst nodes is removed. Eventually, the nodes are sorted in ascending order based on their respective objective function values and divided into two equal groups; each node of one group acts as an environment object for the corresponding cooling object of the other group, and conversely.
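A minimal Python sketch of this initialization and grouping step is given below; the function names, the use of NumPy, and the pairing of the sorted halves are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def init_temperatures(n_nodes, dim, t_min, t_max, rng=None):
    """Place each node's primary temperature vector randomly within [t_min, t_max]."""
    rng = rng or np.random.default_rng()
    return t_min + rng.random((n_nodes, dim)) * (t_max - t_min)

def split_into_pairs(population, cost_fn):
    """Sort nodes by cost (ascending) and split into two equal groups:
    the better half act as environment objects for the worse (cooling) half."""
    population = np.asarray(population)
    costs = np.array([cost_fn(p) for p in population])
    ranked = population[np.argsort(costs)]
    half = len(ranked) // 2
    env_objects, cooling_objects = ranked[:half], ranked[half:]
    return env_objects, cooling_objects
```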

Generally, if the $\beta$ value of an object is smaller, it only slightly modifies its temperature; this feature is projected in analogy with physical heat exchange. The $\beta$ value of every node is calculated based on equation (2), $\beta = \mathrm{cost(object)}/\mathrm{cost(worst\ object)}$. Therefore, the $\beta$ value of a lower-cost node remains small, and such a node only slightly modifies its position.

The time value depends on the number of iterations; $t$ denotes the time value for all the nodes and is computed as follows: $t = \mathrm{iteration}/\mathrm{maximum\ iteration}$, where iteration and maximum iteration demonstrate the present and maximal numbers of iterations, correspondingly. The environment temperature is then replaced according to equation (4), $T_{env}^{new} = \left(1 - (c_1 + c_2(1 - t))\,\mathrm{rand}\right) T_{env}^{old}$, where $c_1$ and $c_2$ denote control variables.

$T_{env}^{old}$ refers to the previous environment temperature of the node, which is modified to $T_{env}^{new}$. (i) The term $(1 - t)$ is recognized to decrease the randomness as the final iteration approaches: while the procedure nears the end, $t$ grows and the produced randomness reduces in a linear fashion. (ii) $c_1$ checks the size of the random steps and still contains randomness if the descending term $(1 - t)$ is not utilized. (iii) $c_2$ controls $(1 - t)$; where a decrease is not needed, it can be regarded as equal to zero.

Under this scheme, the preceding environment temperature is multiplied by the factor $(1 - (c_1 + c_2(1 - t))\,\mathrm{rand})$, and $c_1$ and $c_2$ are chosen from $\{0, 1\}$. With the preceding stages and equation (4), the updated temperature of every node is defined based on the following equation: $T_i^{new} = T_{env} + \left(T_i^{old} - T_{env}\right) e^{-\beta t}$.

The parameter $Pro$, taking a value within $(0, 1)$, defines whether a component of each node is replaced. For every node, $Pro$ is compared with $\mathrm{rand}$, an arbitrary number that is uniformly distributed between zero and one. If $\mathrm{rand} < Pro$, one dimension of the node is arbitrarily selected, and its value is redefined as follows: $T_{i,j} = T_{j,\min} + \mathrm{rand}\cdot\left(T_{j,\max} - T_{j,\min}\right)$, where $T_{i,j}$ refers to the $j$th variable of node $i$, and $T_{j,\min}$ and $T_{j,\max}$ imply the lower and upper limits of this variable, correspondingly. Only one dimension is altered to preserve the structure of the nodes. This method gives the nodes many opportunities to move throughout the search region and attain optimum diversity.
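The following hedged sketch puts the preceding update steps together for a single node, following the standard TEO formulation assumed above; the default values of the control parameters c1, c2, and Pro are illustrative choices, not the paper's settings.

```python
import numpy as np

def teo_update(t_old, t_env, beta, iteration, max_iter, t_min, t_max,
               c1=1.0, c2=1.0, pro=0.3, rng=None):
    """One TEO temperature update for a single cooling object (arrays of equal dim)."""
    rng = rng or np.random.default_rng()
    t = iteration / max_iter                                   # time value
    # Perturb the environment temperature with the control variables c1, c2.
    t_env = (1.0 - (c1 + c2 * (1.0 - t)) * rng.random()) * t_env
    # Heat exchange: the node temperature moves toward the environment temperature.
    t_new = t_env + (t_old - t_env) * np.exp(-beta * t)
    # With probability `pro`, regenerate one randomly chosen dimension for diversity.
    if rng.random() < pro:
        j = rng.integers(t_new.size)
        t_new[j] = t_min[j] + rng.random() * (t_max[j] - t_min[j])
    return t_new
```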

In this work, the TEO algorithm is improved into the CTEO algorithm using chaotic concepts [20]. A chaos map employs chaotic variables of a changeable nature in place of random variables. Such sequences originate from nonlinear dynamic systems and are nonconvergent, nonperiodic, and bounded. They can offer easy searching together with a superior convergence rate compared with random search, providing better exploration of the solution space owing to the dynamic behaviour of the turbulence-like sequence. The current analysis utilizes a sinusoidal chaotic map function to improve both the convergence speed and the premature convergence of the TEO technique so as to achieve a trade-off between exploitation and exploration. This is performed to provide a well-defined outcome from the solution space that does not get stuck at local optimum points. In order to modify the TEO approach with the help of a chaos map, the random numbers are replaced with chaotic values generated as follows: $x_{k+1} = a\,x_k^{2} \sin(\pi x_k)$, where $a$ defines the control parameter, and $x_k$ and $x_{k+1}$ imply the chaotic numbers generated in the preceding and the existing iterations, correspondingly. The control parameter and the initial chaotic value are fixed at the start of the run.
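A small sketch of such a sinusoidal chaotic map is shown below; the control parameter value a = 2.3 and the seed 0.7 are common choices from the chaos-map literature and are assumptions here, not values taken from the paper.

```python
import numpy as np

def sinusoidal_map(x_prev, a=2.3):
    """Next chaotic value in (0, 1) from the previous one."""
    return a * x_prev ** 2 * np.sin(np.pi * x_prev)

# Example: a short chaotic sequence that can replace uniform random numbers.
x, sequence = 0.7, []
for _ in range(5):
    x = sinusoidal_map(x)
    sequence.append(round(x, 4))
print(sequence)   # bounded, nonperiodic values in (0, 1)
```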

3.1.2. Application of EACTEO-C Technique for CH Selection

The primary goal of the EACTEO-C technique is to minimize the distance between the carefully chosen CH nodes and the other nodes and, in turn, to minimize the delay incurred while transmitting information from one node to another. In contrast, the network energy should remain high, i.e., only a small amount of energy should be consumed during data communication. The objective function for CH selection is given in equation (7) as a weighted combination of the constraints on distance, delay, and energy. The distance component accounts for the packet transmission from a normal node to its CH and from the CH to the sink node; its value remains high when the number of normal nodes and their distances to the CH increase [21].

The distance component involves the normal nodes of the xth cluster, the CH of the xth cluster, the distance between the BS and the CH, the distance between a normal node and its CH, and the distance between two normal nodes, together with the number of nodes that do not belong to the xth and yth clusters. The energy component becomes higher than one when the cumulative energy of the CHs is low and the number of CHs is high.

The delay component of the fitness function is directly proportional to the number of nodes that reside in a cluster; thus, the delay is reduced when a CH owns a smaller number of nodes. The denominator is the overall number of nodes in the WSN, and the numerator is the largest number of member nodes attached to any CH; the value of this component therefore lies between zero and one.
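To make the weighted combination concrete, the sketch below assembles illustrative distance, energy, and delay terms into a single fitness value; the weights, normalization, and helper structure are assumptions and not the paper's exact equation (7).

```python
import numpy as np

def ch_fitness(nodes, ch_idx, sink, residual_energy,
               alpha=0.4, beta=0.3, gamma=0.3):
    """Lower fitness is better: short distances, high CH energy, small clusters."""
    chs = nodes[ch_idx]
    d_to_chs = np.linalg.norm(nodes[:, None, :] - chs[None, :, :], axis=2)
    members = np.argmin(d_to_chs, axis=1)                 # each node joins nearest CH
    # Distance term: node-to-CH plus CH-to-sink distances.
    f_dist = d_to_chs[np.arange(len(nodes)), members].mean() + \
             np.linalg.norm(chs - sink, axis=1).mean()
    # Energy term: grows when the cumulative residual energy of the CHs is low.
    f_energy = 1.0 / (residual_energy[ch_idx].sum() + 1e-9)
    # Delay term: largest cluster size divided by the total number of nodes.
    f_delay = np.bincount(members, minlength=len(ch_idx)).max() / len(nodes)
    return alpha * f_dist + beta * f_energy + gamma * f_delay

# Example: 50 random nodes, 5 candidate CHs, and a sink at the field corner.
rng = np.random.default_rng(0)
nodes = rng.random((50, 2)) * 100
energy = rng.random(50)
print(ch_fitness(nodes, np.array([3, 10, 22, 31, 44]), np.array([100.0, 100.0]), energy))
```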

3.2. Disease Diagnosis Module

In this work, the disease diagnosis module encompasses a series of subprocesses, namely preprocessing, K-medoid clustering-based outlier removal, MHA-BLSTM-based classification, and WSSA-based hyperparameter optimization.

3.2.1. Data Preprocessing

At the initial stage, preprocessing takes place in different ways, namely data normalization, data transformation, and data augmentation. In this work, the min-max normalization approach is used to normalize the input medical data. Besides, the data are transformed into a useful format, and data augmentation is applied using the SMOTE technique to increase the size of the dataset.
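A minimal preprocessing sketch under these steps might look as follows, assuming scikit-learn for min-max normalization and the imbalanced-learn package for SMOTE; the feature matrix X and label vector y are placeholders.

```python
from sklearn.preprocessing import MinMaxScaler
from imblearn.over_sampling import SMOTE

def preprocess(X, y):
    X_norm = MinMaxScaler().fit_transform(X)                       # min-max normalization
    X_aug, y_aug = SMOTE(random_state=42).fit_resample(X_norm, y)  # class balancing
    return X_aug, y_aug
```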

3.2.2. K-Medoid Clustering

Next to data preprocessing, the outlier removal process is carried out using the K-medoid clustering approach. The K-means approach, which determines the means of the data points in its calculation, is highly sensitive to outliers. To resolve this, a variant was developed in which medoids are utilized rather than the average value of the cluster. Medoids are the centre points of the clusters, and the approach is named k-medoids clustering. Even though k-medoids is computationally more demanding, it is not as sensitive to the existence of outlier points and is appropriate for both discrete and continuous fields of information [22]. Generally, the input provided is the value of k, which denotes the number of clusters to be determined from the data; for every one of the k clusters, a reference point is chosen. The difference between the k-medoids and k-means algorithms is that k-medoids takes an actual data point as the reference object for a cluster, whereas k-means takes the average value of the cluster as the reference point.
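A hedged sketch of k-medoids-based outlier removal is given below, assuming the scikit-learn-extra package; the number of clusters and the fraction of samples retained are illustrative choices.

```python
import numpy as np
from sklearn_extra.cluster import KMedoids   # assumes scikit-learn-extra is installed

def remove_outliers(X, y, n_clusters=3, keep_fraction=0.95):
    km = KMedoids(n_clusters=n_clusters, random_state=0).fit(X)
    # Distance of every sample to the medoid of its assigned cluster.
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    keep = dist <= np.quantile(dist, keep_fraction)   # drop the farthest samples
    return X[keep], y[keep]
```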

3.2.3. Data Classification Using MHA-BLSTM Model

During the data classification process, the MHA-BLSTM model is employed. RNN is a well-known technique for training on sequential data, for instance in image processing, video capture, and word prediction, since it can remember sequence elements using a memory cell. The main problem in handling an RNN is that, once it is trained with a long step size, it cannot remember the data for a longer period since the backpropagated gradient either shrinks or grows at every time step, which makes the training weights vanish or explode. LSTM overcomes this problem: a standard LSTM unit consists of input, output, and forget gates that control the data flowing into and out of the memory cell. The structure of a single LSTM cell includes the logistic sigmoid function together with the input gate, forget gate, output gate, and cell state. The input gate determines the ratio of the input that has an impact on the value of the cell state [23]. This framework resolves the exploding and vanishing gradient problems.

Figure 3 demonstrates the framework of Bi-LSTM. Bi-LSTM has both forward and backward LSTM layers: the forward layer captures the historical information of the sequence, while the backward layer captures the future information of the sequence. The two layers are linked to the same output layer. Our network combines Bi-LSTM with a multihead (MH) attention process. MH attention permits the model to jointly attend to information from different representation subspaces at distinct positions. The attention process plays a vital role in DL networks in capturing explicit and latent context, and the MH attention process is presented since it utilizes several individual attention functions to capture distinct contexts. The attention function takes as input a query and a group of key-value pairs. The MH attention method first transforms Q, K, and V into C subspaces with distinct, learnable linear projections.

At this point, the query, key, and value heads of each subspace are obtained through learnable parameter matrices whose sizes are determined by the model dimension and the subspace dimension. Moreover, the attention functions are executed concurrently to obtain the resultant states.

Each attention head forms its own attention distribution, and the resultant states of all heads are concatenated to produce the final state.
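A hedged Keras sketch of such an MHA-BLSTM classifier is shown below; the layer sizes, number of attention heads, pooling choice, and sigmoid output are illustrative assumptions rather than the paper's exact architecture.

```python
from tensorflow.keras import layers, Model

def build_mha_blstm(timesteps, n_features, units=64, n_heads=4):
    inputs = layers.Input(shape=(timesteps, n_features))
    # BiLSTM returns the full sequence so attention can attend over every step.
    seq = layers.Bidirectional(layers.LSTM(units, return_sequences=True))(inputs)
    # Multi-head self-attention over the BiLSTM hidden states.
    attn = layers.MultiHeadAttention(num_heads=n_heads, key_dim=units)(seq, seq)
    pooled = layers.GlobalAveragePooling1D()(attn)
    outputs = layers.Dense(1, activation="sigmoid")(pooled)   # disease present/absent
    return Model(inputs, outputs)

model = build_mha_blstm(timesteps=1, n_features=13)   # 13 heart-disease attributes
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```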

3.2.4. Parameter Tuning Using WSSA Technique

In order to fine-tune the parameters involved in the DL model, the WSSA technique is used, which in turn improves the classifier results. The SSA approach is inspired by the navigation behaviour of salps searching for food in the ocean [24]. The population is classified into a leader and followers. In the searching process of an optimization technique, it is important to balance the exploration and exploitation capabilities to accomplish better efficiency. The idea of an inertia weight factor was initially presented to quicken the convergence speed. Researchers find that when the inertia weight is smaller, the particle has a stronger exploitation capability but easily falls into local optima; in contrast, when the inertia weight is larger, the particle has a stronger exploration ability but the searching efficacy becomes low. The inertia weight factor was therefore introduced to enhance the searching process: the weight factor reduces linearly to balance between exploration and exploitation, so the particle has a stronger global searching capability in the earlier stage and searches for the precise outcome in the later stage. In the current study, to enhance the outcomes of traditional SSA, a weight factor that changes dynamically with the number of iterations is included in the position update [25]. The weight factor decreases linearly with the number of iterations from its maximum to its minimum value to accomplish optimal outcomes: $w = w_{\max} - (w_{\max} - w_{\min})\,t/t_{\max}$, where $w_{\max}$ and $w_{\min}$ denote the maximal and minimal values of the weight factor, $t$ represents the present iteration, and $t_{\max}$ indicates the maximal iteration. The positions of the leader and the followers are then upgraded in WSSA using weighted forms of the SSA update equations, in which the remaining variables have the same meanings as in SSA.
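The sketch below illustrates one WSSA position update with the linearly decreasing weight factor; where exactly the weight enters the leader and follower rules, and the bounds w_max and w_min, are assumptions here rather than the paper's exact formulation.

```python
import numpy as np

def weight_factor(t, t_max, w_max=0.9, w_min=0.4):
    return w_max - (w_max - w_min) * t / t_max        # linear decrease over iterations

def wssa_step(positions, food, lb, ub, t, t_max, rng=None):
    """One weighted SSA update: row 0 is the leader, the rest are followers."""
    rng = rng or np.random.default_rng()
    w = weight_factor(t, t_max)
    c1 = 2.0 * np.exp(-((4.0 * t / t_max) ** 2))      # standard SSA coefficient
    new = positions.copy()
    for j in range(positions.shape[1]):               # leader salp moves around the food
        c2, c3 = rng.random(), rng.random()
        step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
        new[0, j] = w * food[j] + step if c3 >= 0.5 else w * food[j] - step
    for i in range(1, positions.shape[0]):            # followers trail the salp ahead
        new[i] = w * 0.5 * (positions[i] + positions[i - 1])
    return np.clip(new, lb, ub)
```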

The WSSA approach derives a fitness function to accomplish better classification accuracy. It defines a positive value to characterize the accuracy of a candidate solution. Here, the minimization of the classification error rate is taken as the fitness function: the optimum solution has the least error rate, whereas the worst solution yields an increased error rate.
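A compact sketch of this fitness evaluation is given below; train_candidate is a hypothetical helper that builds and trains a candidate MHA-BLSTM model from a hyperparameter vector.

```python
import numpy as np

def fitness(hyperparams, X_val, y_val, train_candidate):
    """Validation classification error rate of a candidate configuration (lower is better)."""
    model = train_candidate(hyperparams)                   # hypothetical builder/trainer
    y_pred = (model.predict(X_val) > 0.5).astype(int).ravel()
    return float(np.mean(y_pred != y_val))                 # error rate = 1 - accuracy
```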

4. Experimental Validation

In this section, the proposed IEA-TEODL model is experimentally validated using a heart disease dataset [26]. It comprises 270 samples with 13 attributes, namely age, sex, chest pain type, resting blood pressure, serum cholesterol, fasting blood sugar, resting electrocardiographic results, maximum heart rate achieved, exercise-induced angina, oldpeak, slope of peak exercise, number of major vessels, and thal. Besides, the dataset includes two class labels, namely the presence and the absence of heart disease.

4.1. Results Analysis

Table 1 and Figure 4 provide the overall analysis results of the IEA-TEODL model on the heart disease dataset under three runs. The results demonstrate that the proposed IEA-TEODL model accomplished effectual classification outcomes under all runs. For instance, with run-1, the IEA-TEODL model achieved values of 98.76%, 93.09%, 91.27%, and 95.61% under the respective measures. Along with that, with run-2, the proposed IEA-TEODL approach accomplished values of 98.21%, 92.56%, 94.19%, and 94.16%. In line with these, with run-3, the IEA-TEODL methodology offered values of 99.15%, 96.32%, 95.92%, and 99.33%.

Figure 5 depicts the ROC curves generated by the IEA-TEODL approach under three runs. The figure shows that the proposed IEA-TEODL technique reached an enhanced outcome with maximum values under the different runs. For instance, with run-1, the proposed IEA-TEODL methodology obtained a high ROC of 97.0602. Likewise, with run-2, the IEA-TEODL algorithm obtained an enhanced ROC of 97.4922. Eventually, with run-3, the proposed IEA-TEODL system achieved an increased ROC of 98.4221.

Figure 6 provides the accuracy and loss graphs of the IEA-TEODL approach under three runs. The outcomes show that the accuracy value increased while the loss value decreased with an increase in epoch count. It can also be understood that the training loss is low and the validation accuracy is high under all three runs.

4.2. Discussion

A brief analysis was conducted on the IEA-TEODL model against existing models, and the results are shown in Table 2 and Figure 7. The results report that the proposed IEA-TEODL model achieved better outcomes on this measure under distinct instance counts. For instance, with 2000 instances, the IEA-TEODL model reached an increased value of 96.58%, whereas the NN, NB, SVM, and ANN models obtained reduced values of 93.55%, 87.97%, 83.16%, and 95.33%, correspondingly. In addition, with 10000 instances, the proposed IEA-TEODL model reached an increased value of 99.15%, while the NN, NB, SVM, and ANN models obtained reduced values of 93.47%, 88.26%, 84.21%, and 98.70%, respectively.

A comparative analysis was conducted on the IEA-TEODL model against existing models, and the results are shown in Table 3 and Figure 8. The results report that the proposed IEA-TEODL approach achieved better outcomes on this measure under various instance counts. For instance, with 2000 instances, the IEA-TEODL approach reached an increased value of 95.40%, whereas the NN, NB, SVM, and ANN models obtained the least values of 84.86%, 83.71%, 80.93%, and 94.36%, respectively. Furthermore, with 10000 instances, the proposed IEA-TEODL technique reached an increased value of 96.32%, whereas the NN, NB, SVM, and ANN methodologies obtained lower values of 90.26%, 86.91%, 84.13%, and 91.90%, correspondingly.

A detailed analysis was conducted on the IEA-TEODL algorithm against existing methods, and the results are shown in Table 4 and Figure 9. The results report that the proposed IEA-TEODL technique achieved better outcomes on this measure under distinct instance counts. For instance, with 2000 instances, the proposed IEA-TEODL model attained an increased value of 94.28%, whereas the NN, NB, SVM, and ANN systems obtained lower values of 88.73%, 77.43%, 73.17%, and 92.54%, correspondingly.

Additionally, with 10000 instances, the proposed IEA-TEODL approach reached the maximum value of 95.92%, whereas the NN, NB, SVM, and ANN models obtained lower values, namely 89.61%, 82.02%, 81.98%, and 93.88%, correspondingly.

A brief analysis was conducted between the IEA-TEODL method and the existing models, and the results are shown in Table 5 and Figure 10. The results infer that the proposed IEA-TEODL approach achieved better outcomes on this measure under distinct instance counts. For instance, with 2000 instances, the presented IEA-TEODL model reached the maximum value of 98.32%, while the NN, NB, SVM, and ANN algorithms obtained lower values of 92.33%, 84.63%, 81.59%, and 97.67%, correspondingly. Finally, with 10000 instances, the proposed IEA-TEODL algorithm obtained an increased value of 99.33%, whereas the NN, NB, SVM, and ANN models reached lower values of 97.71%, 84.25%, 82.32%, and 95.84%, correspondingly.

At last, a brief TEC examination was conducted between the IEA-TEODL model and recent methods, and the results are shown in Table 6 and Figure 11 [27]. The experimental values highlight that the proposed IEA-TEODL model produced effective TEC values under distinct IoT sensor counts. For instance, with 100 IoT sensors, the IEA-TEODL model gained a low TEC of 41.30%, whereas the EE-PSO, ABC, GWO, and ACO algorithms obtained high TEC values of 45.04%, 57.14%, 60.65%, and 66.16%, respectively. At the same time, with 300 IoT sensors, the proposed IEA-TEODL method gained a low TEC of 57.71%, whereas the EE-PSO, ABC, GWO, and ACO systems obtained high TEC values of 59.73%, 67.24%, 73.44%, and 77.15%, correspondingly. In line with this, with 500 IoT sensors, the proposed IEA-TEODL model gained a low TEC of 65.74%, whereas the EE-PSO, ABC, GWO, and ACO approaches attained high TEC values of 69.28%, 78.51%, 82.11%, and 84.08%, correspondingly.

After examining the above-mentioned tables and figures, it is apparent that the proposed IEA-TEODL technique outperformed other methods. The enhanced performance of the proposed model is due to the integration of EACTEO-C-based cluster construction and optimal DL-based disease classification.

5. Conclusion

In this study, a novel IEA-TEODL technique has been developed to accomplish clustering and decision-making in an IoT-enabled smart healthcare environment. The proposed IEA-TEODL technique follows a two-stage process, namely EACTEO-C-based cluster construction and optimal DL-based disease classification. Besides, the disease diagnosis module encompasses a series of subprocesses, namely preprocessing, outlier removal, MHA-BLSTM-based classification, and WSSA-based hyperparameter optimization. In order to validate the promising performance of the proposed IEA-TEODL technique, a wide range of simulations was conducted against benchmark datasets. The simulation results established the enhanced outcomes of the IEA-TEODL technique over other recent techniques under distinct evaluation metrics. Thus, the IEA-TEODL technique can be utilized as an effectual tool to accomplish energy efficiency and data classification in an IoT environment. In the future, lightweight cryptography and authentication mechanisms can be included to assure security in the smart healthcare environment.

Data Availability

Data sharing is not applicable to this article as no datasets were generated during the current study.


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research work was funded by Institutional Fund Projects under grant no. (IFPFP-273-22). Therefore, the authors gratefully acknowledge the technical and financial support provided by the Ministry of Education and the Deanship of Scientific Research (DSR), King Abdulaziz University (KAU), Jeddah, Saudi Arabia.