Abstract

Cloud computing is a technology that provides dynamic and flexible computing capability and storage through on-demand delivery and pay-as-you-go services over the Internet. This technology has brought significant advances in the Information Technology (IT) domain. In the last few years, the evolution of cloud computing has led to the development of new technologies such as cloud federation, edge computing, and fog computing. However, with the growth of the Internet of Things (IoT), several challenges have emerged around these new technologies. Therefore, this paper discusses each of the emerging cloud-based technologies, as well as their architectures, opportunities, and challenges. We present how cloud computing evolved from one paradigm to another through the interplay of benefits such as improved computational resources obtained by combining the strengths of various Cloud Service Providers (CSPs), decreased latency, improved bandwidth, and so on. Furthermore, the paper highlights the application of the different cloud paradigms in the healthcare ecosystem.

1. Introduction

Cloud computing has made the dream of scalable computational resources a reality and is now being adopted across several usage models. The IT domain recognizes cloud computing as an emerging technology because it finds application in all disciplines. The prominent roles of cloud computing include hosting and delivering diverse software and services over the Internet [1, 2]. Cloud computing is a critical infrastructure for many organizations. After more than ten years of development, it has achieved great success and has significantly influenced the economy, society, industry, and science. With the fast development of the mobile Internet and big data technology, most online services and data services are built on top of cloud computing. Thus, cloud computing has found applications in business, education, marketing, and the medical and research fields [3].

Rather than merely providing a product via the Internet, cloud computing delivers IT solutions as a service. More than $1 trillion has been invested, directly or indirectly, in cloud computing systems. Leading CSPs such as Amazon, Microsoft, Salesforce, and Google are in intense competition [4] in terms of number of clients, reliability, and innovative service delivery.

With its unique properties such as rapid elasticity, on-demand self-service, and resource pooling, the cloud allows clients to rent online IT resources, platforms, and software services when necessary. Thus, cloud customers can integrate their business applications on a pay-per-use basis, store data, and process and run analytics through the Internet [5].
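
As a back-of-envelope illustration of the pay-per-use model described above, the following sketch totals a monthly bill from metered usage. The rates are invented placeholder values, not those of any real CSP.

```python
# Hypothetical pay-per-use billing sketch; all rates are invented
# placeholders, not the pricing of any real provider.

def pay_per_use_cost(vm_hours: float, storage_gb_months: float,
                     egress_gb: float,
                     vm_rate: float = 0.05,
                     storage_rate: float = 0.02,
                     egress_rate: float = 0.09) -> float:
    """Return the total charge for one billing period, rounded to cents."""
    return round(vm_hours * vm_rate
                 + storage_gb_months * storage_rate
                 + egress_gb * egress_rate, 2)

# A customer who ran a VM for 720 hours, stored 100 GB, and served 50 GB:
bill = pay_per_use_cost(720, 100, 50)
print(bill)  # 42.5
```

The point of the model is visible in the function signature: the customer pays only for metered consumption, with no up-front capacity purchase.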

Beyond basic cloud computing, another paradigm has emerged that allows a single CSP to grow beyond its capacity. Due to the tremendous demand from online businesses, a single CSP can be overwhelmed, so that its computing capacity becomes inadequate to fulfil its customers’ requirements. This issue has led CSPs to pool their resources on the basis of Service Level Agreements (SLAs) to ensure that the Quality of Service (QoS) meets customers’ requirements. The resulting architecture, called “Cloud Federation” (also “Inter-Cloud” or “Federated Cloud”), is advantageous in terms of interoperability and financial benefits both for the customer who needs computational resources and for the CSPs that, apart from having extra resources, need to keep their customers [6].

IoT devices generate and deliver a massive amount of data for analysis and decision-making. Applications such as remote surgery and smart cities require minimal response time. Therefore, it has become critical to bring computing power to the edge of the network. Both edge and fog computing efficiently help to achieve a level of quality of service (in IoT applications that demand low latency), high bandwidth, and mobility support that a cloud environment itself cannot offer due to its centralized nature [7–9].

This paper reviews evolutionary trends in cloud computing (beginning from federated cloud through edge computing and finally to fog computing) by developing and describing the primary concept around each technology and the revolution that each technology has brought to the IT domain. We also discuss the novelty in each technology application and present the opportunities and challenges of these technologies to guide research in the healthcare ecosystem. This is not to downplay the relevance of cloud computing in other application domains such as self-driving cars, autonomous air vehicles, nuclear power plant control, and air traffic management [10].

Safety-critical systems are present in medical applications and devices such as heart-lung machines, mechanical ventilation systems, radiation therapy machines, and robotics surgery machines. All of these machines must meet stringent requirements [11]. In addition, other safety-critical systems are exhibited in other healthcare-related technological areas such as fire alarm and life support systems. These systems use either the cloud or one of its related aforementioned technologies to process the large amount of data generated by various IoT sensors.

The contribution of this review is the provision of a good understanding of the evolutionary trends in cloud computing by illustrating each technology using the healthcare ecosystem as a case study. An overview of the different evolutionary trends with a specific focus on the health industry, including the drawbacks and opportunities of each paradigm, is fully presented in this work.

2. Related Works

There have been substantial contributions in terms of literature surveys of emerging trends in cloud computing. This section presents related survey works on cloud computing and their limitations.

A detailed fog computing architecture together with its levels, and a survey of the various computing paradigms and their features, are discussed in [12]. The paper presents an in-depth analysis of fog computing and a systematic analysis of its challenges in relation to IoT. However, the work presents neither applications of fog computing that show the opportunities it brings to the IT domain nor applications of related paradigms that may lead end-users to opt for fog computing in various fields.

In [13], the authors present the architecture, characteristics, challenges, and need for fog computing. However, the work does not relate fog computing to other cloud paradigms to show how the technology fits user requirements compared with the other paradigms. It also does not present applications of fog computing in various domains.

The benefits, problems, and drawbacks of edge-to-cloud computing are discussed in [14]. The work also examines edge architecture and applications suited to current and future edge and cloud computing opportunities. However, it does not consider the overall trends of cloud computing as a whole. Consequently, no information on the opportunities and challenges of the emerging trends of cloud computing is presented.

A review of IoT technology and its impact on big data, followed by a demonstration of how fog computing, as a new approach, may aid IoT expansion was undertaken in [15]. The paper presented the integration of big data with IoT in various applications and the advantages and drawbacks of fog computing.

The authors in [16] proposed a taxonomy of tangible indicators for evaluating cloud, fog, and edge computing performance. The authors conducted a literature review to identify common indicators and applications, and the open challenges these paradigms bring to the evolution were discussed. Unfortunately, the paper did not present the opportunities these emerging paradigms bring to the IT domain.

Thus, to address some of the highlighted gaps in the foregoing works, the study at hand provides a comprehensive survey of the emerging trends of cloud computing with specific emphasis on the healthcare ecosystem, which is a safety-critical domain. Safety-critical systems are systems where time delay, security breach, lack of computational resources or resource availability, and so forth can lead to catastrophe. The drawbacks and opportunities of these safety-critical systems as applied to the trends in cloud computing are also discussed. We also highlight the open challenges from both the end-users and CSPs perspectives in order to guide future research in the field.

3. Cloud Computing Overview

This section presents, in general, the cloud computing paradigm and its architecture and illustrates its utilization in healthcare.

3.1. Cloud Computing Architecture

Cloud computing as an on-demand delivery and pay-as-you-go technology provides scalable, flexible, and manageable resources by virtualizing existing resources [17–20]. Cloud computing is composed of different deployment models that define how customers access cloud services. These models are Private Cloud, Public Cloud, Hybrid Cloud, and Community Cloud. Criteria such as ownership, scale of the system, regulation of the infrastructure, and where the infrastructure resides distinguish the implementation of each model [21–24].

After establishing the cloud, its services are deployed in business models that may vary depending on the user’s specifications. The cloud service model is mainly divided into three types, namely, Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) [25–27].

Cloud computing is also called the “Layered Computing Model” [28]. Its architecture can be split into two components: the front end and the back end. The front end represents what the end-users see, while the back end describes what the end-users cannot see, otherwise called the cloud section of the system [29]. Alternatively, the architecture of cloud computing can be divided into four layers, namely, the (i) hardware layer, (ii) infrastructure layer, (iii) platform layer, and (iv) application layer, as shown in Figure 1. These four layers are described hereafter:
(1) Hardware layer: also referred to as the “server or physical layer,” it is responsible for handling all physical resources such as servers, switches, routers, and cooling systems, which are implemented in a data center. This layer is the lowest layer of the infrastructure, where servers are interconnected through switches and routers [21, 30, 31].
(2) Infrastructure layer: also known as the “virtualization layer,” it allows full access to and control of the infrastructure responsible for setting up active directories and protocols. The infrastructure layer is an essential part of cloud computing because it virtualizes the physical resources. It consists of multiple sublayers such as the virtual network, virtual storage, and Virtual Machine (VM). This layer pools storage and computational resources to deliver services according to user or business requirements. It also necessitates a thorough understanding of load balancing as well as all other hardware virtualization concepts [21, 30, 31].
(3) Platform layer: it provides the Application Programming Interface (API) for the implementation of application frameworks. This layer offers API support for implementing the database, storage, and business logic of web applications, for example, Google App Engine (Python framework). The platform layer comprises an operating system and an application framework [21, 30, 31].
(4) Application layer: the functionality of this layer is accessible via the Internet to deliver services such as Gmail and Zoom. It is the most utilized and most important layer because it is closest to the end-users of cloud computing [30, 31].

3.2. Applications of Cloud Computing in Healthcare

In healthcare, wearable sensors are used to collect a considerable number of vital signs, providing biological data from patients that are used to monitor and diagnose illness. These biological data support diagnosis, interpretation, and proactive action in several scenarios through the early prescription of medicine to patients. For instance, regularly evaluating glucose levels or heart rate after a clinical surgery is critical in assessing the recovery status of patients, particularly elderly patients [32].

Several works exist on wearable health monitoring. For instance, the authors in [33] discuss the design of a Cloud-Based Intelligent Health Care Service (CBIHCS) that performs real-time monitoring of blood glucose, weight, and heart rate for diagnosing chronic illnesses such as diabetes. In the work, fundamental body signs are collected with the help of body sensors, and the collected data are sent to and stored in the cloud for analysis and classification. In [34], the authors established a solid and proprietary privacy security system named Privacy-Preserving Disease Prediction (PPDP). Patient health records are encrypted in PPDP and uploaded to a cloud server. The risk of having a disease, estimated with AI prediction algorithms, is then reported as new clinical knowledge. A prediction of heart disease in a cloud environment was provided in [35]. The authors proposed a suitable model based on patient information to help physicians predict heart disease. The paper presented the results of various models implemented with a heart disease dataset. The results revealed that the Naïve Bayes model provided the highest accuracy of 86.42%, followed by AdaBoost and the boosted tree. Furthermore, the three models were combined to give an enhanced accuracy of up to 87.91%. The experiment was conducted in a cloud environment using 10,082 instances, which significantly reduced the execution time while maintaining maximum accuracy.
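
To make the classification approach reported in [35] concrete, the following is a minimal from-scratch sketch of Gaussian Naïve Bayes, the model the study found most accurate. The toy vital-sign data below are invented for illustration only and are unrelated to the study's actual dataset or its reported accuracies.

```python
import math

# Minimal Gaussian Naive Bayes sketch in the spirit of [35]. The toy
# training set below is a made-up example, not the study's dataset.

def fit(X, y):
    """Return per-class feature means, variances, and priors."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(col) for col in zip(*rows)]
        variances = [sum((v - m) ** 2 for v in col) / len(col) + 1e-9
                     for col, m in zip(zip(*rows), means)]
        model[c] = (means, variances, len(rows) / len(y))
    return model

def predict(model, x):
    """Pick the class with the highest log-posterior."""
    def log_post(c):
        means, variances, prior = model[c]
        ll = sum(-0.5 * math.log(2 * math.pi * var)
                 - (v - m) ** 2 / (2 * var)
                 for v, m, var in zip(x, means, variances))
        return math.log(prior) + ll
    return max(model, key=log_post)

# Toy training set: [age, resting heart rate]; 1 = at risk, 0 = healthy.
X = [[63, 95], [58, 99], [61, 92], [35, 70], [29, 65], [41, 72]]
y = [1, 1, 1, 0, 0, 0]
model = fit(X, y)
print(predict(model, [60, 97]))  # 1 (resembles the at-risk group)
print(predict(model, [33, 68]))  # 0 (resembles the healthy group)
```

In a cloud deployment such as the one evaluated in [35], the `fit` step would run on cloud servers over the full dataset, while `predict` could be served on demand.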

The design and deployment of a genomic cloud were undertaken in [36] to enhance computing capability and storage and to provide more flexibility and easier analysis through the use of genomics software. The implementation of the genomic cloud infrastructure is based on multiple technologies, including the Common Workflow Language/Workflow Description Language (CWL/WDL), Docker, Network Attached Storage (NAS), Directed Acyclic Graph (DAG) workflows, and an Object Storage System (OSS). The developed cloud infrastructure also assists the user in generating high-performance clusters, managing tremendous genomic data files, and scripting genomics analysis pipelines.

3.3. Impact of the Utilization of Big Data in Cloud Computing for the Healthcare Ecosystem

Cloud computing and big data analytics are two disciplines that are rapidly revolutionizing the healthcare ecosystem. These technologies have brought about powerful results and attractive benefits. Every second, a massive amount of healthcare data is generated, and tremendous computational resources are required to process these enormous data. Unfortunately, small and medium enterprises cannot afford the huge computational resources necessary to meet business needs [37, 38]. Nonetheless, some factors affect a cloud environment in the utilization of big data, as presented hereafter [39]:
(1) Data storage: big data analytics require high-performance hardware for the analysis and storage of data. As data increase continuously, CSPs are expected to increase their storage capacity to remain competitive.
(2) Availability and reliability: CSPs face challenges in delivering their services 24/7. Monitoring the provided service is crucial, and a critical evaluation of the SLA to ensure performance is therefore essential.
(3) Performance and bandwidth cost: increasing bandwidth rather than purchasing hardware equipment can speed up service delivery while reducing hardware spending. However, big data requires both increased hardware and increased bandwidth; therefore, delivering a large amount of data every time, regardless of location, can be expensive.
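
The trade-off in factor (3) can be sketched as a back-of-envelope comparison between shipping data to a remote cloud every month and amortizing local processing hardware. All prices and figures below are assumed placeholder values, not real market rates.

```python
# Illustrative cost comparison for factor (3): recurring transfer cost
# versus amortized local hardware. All numbers are invented placeholders.

def monthly_transfer_cost(tb_per_month: float, price_per_gb: float = 0.08) -> float:
    """Cost of moving tb_per_month terabytes to a remote cloud each month."""
    return tb_per_month * 1024 * price_per_gb

def local_processing_cost(hardware_capex: float, months_amortized: int,
                          power_per_month: float) -> float:
    """Monthly cost of buying hardware and running it locally."""
    return hardware_capex / months_amortized + power_per_month

transfer = monthly_transfer_cost(10)           # 10 TB/month to the cloud
local = local_processing_cost(12000, 36, 150)  # hardware amortized over 3 years
print(transfer, local)  # transfer ≈ 819.2, local ≈ 483.3 per month
```

Under these assumed numbers, processing locally is cheaper once data volumes grow, which is exactly the pressure that motivates the edge and fog paradigms discussed later.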

These few factors have led researchers to introduce new paradigms for enhancing computing capability by leveraging resources that are available from other CSPs or distributed nodes. This is to ensure that computational resources are close to the location of the user so as to reduce latency and improve bandwidth utilization.

The different cloud computing paradigms that have been introduced in the IT domain to address the problems that traditional cloud computing faces include federated cloud (or cloud federation) and fog and edge computing.

4.1. Cloud Federation

In most cases, CSPs fail to fulfil customers’ requirements due to growing demands for cloud services. Relying solely on a single cloud provider can therefore prevent users from obtaining high-quality services whenever they need them. In this case, as the workload increases, cloud federation assists CSPs in scaling up by renting resources from other providers [40].

A cloud federation is a partnership between various entities in which companies can benefit by accessing resources hosted in another cloud environment [41, 42]. Energy efficiency and efficient use of resources are two crucial differentiators in the contemporary cloud computing marketplace. Regardless of how big cloud computing providers can be, they have a finite capacity. A federation of cloud computing infrastructures therefore allows growth beyond a single provider’s capacity. Besides, it enables collaboration and resource sharing [31]. Cloud federation is built around an advanced orchestration model, which serves as a connecting point between the available and compatible resources and users’ requests. This model allows the dynamic provisioning of resources to satisfy both the providers’ and the users’ requirements. Thus, a generic end-user may transparently access any potential computational resource (e.g., CPUs, storage, and network) needed by moving freely from one CSP to another [43].

4.1.1. Cloud Federation Architecture

The fundamental architecture of a federated cloud is illustrated in Figure 2. It consists of several CSPs that offer services to different clients, which can access the federated resources on demand [43]. The different components of the federated cloud are described as follows:
(1) The front-end component is the point through which users access the whole platform.
(2) The cloud service broker component is responsible for distributing resources according to users’ requirements. This component is in charge of billing and metering on the federation platform.
(3) The resource interface component is the point that connects all the cloud computing platforms.
(4) The user allocation table contains the credentials of the users and the association between users and activated services.
(5) The trusted identity component is an interface that handles the credentials and encrypted data stored in the database, which the platform utilizes to authenticate the federation.
(6) The cloud connector is a module that works as an interface with the federation.
(7) The master cloud monitor (MCM) is a module in which cloud agents are set up across the federation to read specific metrics with greater precision than the resource interface component. The MCM collects relevant information from the cloud agents to improve cloud interoperability.
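
A minimal sketch of the cloud service broker component (2) above might allocate each request to the cheapest federation member with enough free capacity, updating its metering state as it goes. The provider names, prices, and capacities are made-up values for illustration.

```python
# Illustrative cloud service broker for a federation: distributes requests
# (allocation) and tracks consumed capacity (metering). All figures invented.

providers = {
    "csp_a": {"free_vcpus": 32, "price_per_vcpu_hour": 0.06},
    "csp_b": {"free_vcpus": 8,  "price_per_vcpu_hour": 0.04},
    "csp_c": {"free_vcpus": 64, "price_per_vcpu_hour": 0.05},
}

def broker_allocate(vcpus_needed: int):
    """Pick the cheapest federation member with enough free capacity."""
    candidates = [(p["price_per_vcpu_hour"], name)
                  for name, p in providers.items()
                  if p["free_vcpus"] >= vcpus_needed]
    if not candidates:
        return None  # no single member can serve the request
    price, name = min(candidates)
    providers[name]["free_vcpus"] -= vcpus_needed  # metering update
    return name

print(broker_allocate(16))  # csp_c: cheapest member with >= 16 free vCPUs
print(broker_allocate(8))   # csp_b: cheapest overall, just enough capacity
```

A real broker would also consult the trusted identity component and SLAs before placing workloads; this sketch only captures the price-and-capacity matching role.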

The entity in charge of managing the use, performance, and delivery of cloud services is known as a cloud broker; it negotiates the relationship between cloud providers and cloud consumers. A cloud service broker can be defined as a “cloud service partner that negotiates the relationship between Cloud Service Customers (CSUs) and Cloud Service Providers (CSPs).” The two central cloud brokering stakeholders are thus CSPs and CSUs. CSUs can obtain economical solutions through a cloud broker, whereas CSPs can gain new ways to develop services and raise profit [44].

Cloud brokerage services play a threefold role: accumulation, integration, and customization. Accumulation involves the collection and provision of different cloud services to the end-user. Integration refers to linking the cloud service, as an intermediary, with the internal system. Customization involves either adjusting the cloud service in compliance with the user’s requirements or creating the cloud application. This threefold role resolves the limitations of a single cloud service, notably data loss due to dependency on a single cloud and platform failure. It also allows easy management of data across multiple Cloud Service Providers [45].

4.1.2. Application of Cloud Federation in Healthcare

The relevance of cloud federation in healthcare stems from the fact that a single cloud may not be sufficient to process the huge amount of medical data generated every second. Therefore, sharing resources among many CSPs is a major contribution of the federated cloud to the evolution of cloud computing. For instance, in African countries where computing capability is a tremendous concern, such facilities would allow the interchange of healthcare records, permit access to expertise that is not locally available, and enable flexible and cost-effective execution of tasks on computing resources [46].

For instance, the authors in [46] proposed a cloud federation system for healthcare using cooperative and competitive cooperation models. The work aimed at connecting multiple medical centers throughout Africa. Simulations were conducted with two new allocation strategies, Genetic Algorithm-Based VM Allocation (GAVA) and Secure Roommate Allocation (SRA), to assess the efficiency of the models. In [47], the authors developed a Software-as-a-Service (SaaS) solution by leveraging the intrinsic security functions of Blockchain technology. This allowed a medical cloud to create a federation with others in order to coordinate a virtual healthcare team involving doctors from various federated hospitals who collaborate to conduct a healthcare workflow. Reference [48] proposed a cloud federation framework that allows the sharing of healthcare and medical resources among different CSPs with timely access that guarantees the integrity and privacy of data. The approach was validated through a series of evaluation studies conducted with the CloudSim toolkit.

4.2. Edge Computing

Internet-enabled applications such as virtual reality, surveillance, augmented reality, and real-time traffic monitoring require quick processing and rapid response times. End-users usually run such applications on their resource-limited mobile devices, while the core operations and processing are traditionally carried out on remote cloud servers. By moving this processing to the network’s edge, edge computing has proved to meet the mobile application requirement of fast response times [49]. The European Telecommunications Standards Institute (ETSI) describes mobile edge computing as a technology that provides an environment for Internet services and cloud computing at the edge of networks, near radio access networks and mobile users.

This model significantly reduces transmission delay and network burden and increases node density and mobility support by bringing processing capability close to the user [49, 50]. Edge computing is a modern model for processing part of the data at the network edge. There is controversy among researchers regarding the meaning and position of the edge. Some view the “edge” as IoT-connected devices with limited resources that process the information gathered. Other researchers see the “edge” as a concept that transfers data processing to the source [9]. For many scholars, the edge comprises the devices, equipment, and network resources that produce, gather, and send data to remote cloud centers [51]. Different authors define edge computing from various perspectives in terms of architecture, technology, capability, or characteristics, as shown in Table 1.

4.2.1. Main Differences between Cloud Computing and Edge Computing

Unlike the cloud computing paradigm, edge computing provides local and decentralized infrastructural services, taking the required resources closer to data sources and avoiding the need to transfer data to a centralized node. Moreover, edge computing has the advantage of delivering real-time responses with very low latency, handling privacy issues, reducing data communication, improving bandwidth utilization, and reducing energy consumption [58]. Cloud computing has high latency, presents a slow response time, and has no offline mode; at the same time, it is scalable, processes big data, and has virtually unlimited computational processing abilities. In contrast, edge computing is storage-limited, requires interconnection through proprietary networks, and is highly power-consuming [50, 51, 59, 60]. Moreover, edge computing can be combined with cloud computing to enhance the efficiency of both approaches, yielding a hybrid edge-cloud computing model [58].
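
The trade-offs above can be condensed into a toy placement rule: latency-critical jobs with small working sets suit the edge, heavy latency-tolerant jobs suit the cloud, and the hybrid model covers the middle ground. The thresholds below are invented for illustration, not measured values.

```python
# Toy edge-versus-cloud placement rule reflecting the trade-offs above.
# The latency and storage thresholds are assumed placeholder values.

def place_task(max_latency_ms: float, data_size_gb: float,
               cloud_latency_ms: float = 120.0,
               edge_storage_limit_gb: float = 2.0) -> str:
    if max_latency_ms < cloud_latency_ms and data_size_gb <= edge_storage_limit_gb:
        return "edge"    # needs a real-time response and fits at the edge
    if max_latency_ms < cloud_latency_ms:
        return "hybrid"  # pre-process at the edge, offload the bulk to the cloud
    return "cloud"       # latency-tolerant big-data analytics

print(place_task(10, 0.5))      # edge
print(place_task(50, 500.0))    # hybrid
print(place_task(5000, 500.0))  # cloud
```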

4.2.2. Architecture of an Edge Computing Platform

The basic structure on which an edge computing platform is built can be summarized into three major parts, as shown in Figure 3. These components are explained hereafter:
(1) Edge devices: these are on-premise equipment that gather or transmit data, including video cameras, sensors, and other electronic components. Primary edge devices can gather data, transmit data, or both. More sophisticated edge devices have more computing capacity, enabling them to perform more operations. The ability to deploy and maintain applications on these edge network devices is vital.
(2) Edge node: the edge network layer and edge servers can be real or virtual servers located at different remote sites or merged in a hyperconverged infrastructure. This layer of the edge computing architecture is divided into two sublayers: the edge server, which helps to store data and perform small computations, and the edge data center, which is responsible for delivering a portion of the intensive data processing close to the user’s location. The edge data center is usually connected to a larger cloud data center that offers more storage capability and computational power.
(3) The cloud: this can run on premise or in a remote public cloud. It handles the processing that is not possible at the other edge layers.
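
The three-tier flow can be sketched as follows: devices emit raw readings, the edge node aggregates them and keeps normal data local, and only escalated work reaches the cloud. The thresholds and readings are illustrative assumptions, not part of any cited system.

```python
# Sketch of the three-tier edge flow: device -> edge node -> cloud.
# Readings and the alert threshold are invented example values.

def edge_node(readings, alert_threshold=100):
    """Edge server sublayer: small computations, keeps normal data local."""
    summary = {"count": len(readings),
               "mean": sum(readings) / len(readings)}
    anomalies = [r for r in readings if r > alert_threshold]
    return summary, anomalies

def cloud(anomalies):
    """Cloud tier: handles what the edge layers cannot (deeper analysis)."""
    return {"escalated": len(anomalies),
            "max": max(anomalies) if anomalies else None}

device_readings = [72, 75, 74, 138, 71, 142]  # e.g., heart-rate samples
summary, anomalies = edge_node(device_readings)
print(summary)           # local aggregate stays at the edge
print(cloud(anomalies))  # only the two anomalous readings are escalated
```

Note how most data never leaves the edge node, which is the bandwidth-saving behavior the architecture above is designed for.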

4.2.3. Applications of Edge Computing in Healthcare

As mentioned earlier, the significant difference between cloud computing and edge computing resides in how and where data processing takes place. With cloud computing, the data are processed and stored in a centralized location, while with edge computing, the data are processed close to the source. Processing data near the source has important implications for response time, bandwidth, and real-time interaction. As technological development advances, the healthcare system needs specialized equipment that enables fast analysis and data processing, better security, cost-effectiveness, and so forth. Combined with IoT systems, edge computing has brought a significant revolution to the healthcare ecosystem. For example, [61] used cognitive computing to monitor and examine users’ physical health. The proposed system also adjusts the allocation of computational resources across the entire edge computing network according to each user’s health-risk level. The study shows that the edge cognitive computing-based healthcare system enhances customer experience, effectively allocates computational resources, and dramatically increases patient survival rates in emergency situations. The study in [62] introduced LiveMicro, a platform that enables edge computing-driven digital pathology computations, such as image processing on a live capture from a microscope, allowing remote pathologists to diagnose in real time in a single virtual microscope session. This supports continuous medical education and remote consultation, which is vital in underserved and remote hospitals and private practices. The work in [63] proposes a healthcare monitoring system architecture that allows remote communication with optimal bandwidth and short response times for fast decision-making during preliminary diagnosis in a virtual environment. The proposed system enables the filtering and compression of a patient’s record with a functional algorithm. An open-source and low-cost approach to assist triage in the emergency department is also presented in [63]. The main objective of that study is noncontact COVID-19 prescreening through the detection of fever and cyanosis.

4.3. Fog Computing

Fog computing emerged as a new paradigm in 2014. It offers improved resource usage and reduced latency for latency-critical applications [64].

Cloud computing’s centralized design prompted researchers to establish a distributed technology as a cloud computing extension that provides consumers with services similar to those offered by cloud computing. Fog computing helps to bring computational resources closer to the source of the generated data, allowing storage, computing, and networking services between traditional cloud computing data centers and end devices [65].

Due to the extreme limitations on the computing resources of IoT devices, it is common to offload tasks requiring substantial computational resources to computer systems with sufficient computing resources, such as High-Performance Computing (HPC) facilities, cloud systems, or data centers [66, 67].

While massive data centers are typically used in cloud computing, fog computing uses small servers, routers, switches, set-top boxes, gateways, or access points. Since fog computing systems occupy less space than cloud computing systems, the hardware can be located closer to users [68]. Fog computing provides a significant improvement in protection, efficiency, and accessibility by providing a robust and distributed communication system with a short delay of about 10 ms and a high throughput on the order of 10 Gbps. Therefore, the fog computing environment complements cloud computing by allowing computing to be deployed immediately at the network’s edge. Also, QoS is a fundamental fog service metric that should be considered in four aspects of delivering fog services: connectivity, reliability, capacity, and delay [69, 70].
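
One hypothetical way to operationalize the four QoS aspects named above is an equally weighted score, with the delay aspect normalized against the ~10 ms figure cited earlier. The weighting scheme and inputs are assumptions for illustration, not a metric from the cited works.

```python
# Hypothetical equally weighted score over the four fog QoS aspects:
# connectivity, reliability, capacity (each already in 0..1), and delay.

def fog_qos_score(connectivity: float, reliability: float,
                  capacity: float, delay_ms: float,
                  target_delay_ms: float = 10.0) -> float:
    """Combine the four aspects into a single 0..1 score (higher is better)."""
    # A node at or under the delay target gets full marks for delay.
    delay_score = min(1.0, target_delay_ms / max(delay_ms, 1e-9))
    return 0.25 * (connectivity + reliability + capacity + delay_score)

# A node meeting the ~10 ms delay figure scores close to its other aspects;
# a slow node is penalized on the delay component.
print(fog_qos_score(0.9, 0.95, 0.8, 10.0))
print(fog_qos_score(0.9, 0.95, 0.8, 100.0))
```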

Hence, fog computing is a model that enables low-latency computing, where fog nodes provide partial validation of transactions that do not require considerable computational resources, and cloud servers provide the final transaction validation when substantial computing capability is needed. This overcomes the processing capability issues of IoT devices and helps achieve the short response times needed to ensure QoS [71].
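
The two-tier validation just described can be sketched as follows: the fog node performs a cheap local check, and only transactions that pass are forwarded to the cloud for the heavier check against global state. The validation rules and transaction fields are invented placeholders, not the scheme of [71].

```python
# Two-tier validation sketch: cheap partial check at the fog node, heavier
# final check in the cloud. Rules and fields are illustrative assumptions.

def fog_partial_validate(tx: dict) -> bool:
    """Lightweight check a resource-constrained fog node can afford."""
    return tx.get("amount", 0) > 0 and "sender" in tx and "receiver" in tx

def cloud_final_validate(tx: dict, ledger: set) -> bool:
    """Heavier check against global state, performed in the cloud."""
    return tx["id"] not in ledger  # e.g., duplicate/replay detection

def process(tx: dict, ledger: set) -> str:
    if not fog_partial_validate(tx):
        return "rejected at fog"    # bad requests never leave the edge
    if not cloud_final_validate(tx, ledger):
        return "rejected at cloud"
    ledger.add(tx["id"])
    return "committed"

ledger = set()
tx = {"id": "t1", "sender": "a", "receiver": "b", "amount": 5}
print(process(tx, ledger))                          # committed
print(process(tx, ledger))                          # rejected at cloud (replay)
print(process({"id": "t2", "amount": -1}, ledger))  # rejected at fog
```

Rejecting malformed transactions at the fog tier is what keeps the response time short: only plausible work incurs the round trip to the cloud.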

4.3.1. Architecture of Fog Computing Platform

Fog computing architecture, as illustrated in Figure 4, is composed of three layers, namely, the (i) IoT or user layer, (ii) fog layer, and (iii) cloud layer [72], described as follows:
(1) The IoT layer incorporates a large number of heterogeneous and omnipresent devices that produce physical-world information. It is a worldwide network of affiliated entities based on shared communication protocols [73].
(2) The fog layer is placed between the IoT layer and the cloud layer. The core service node in this architecture is the fog node. The fog layer consists of many elements, namely, gateways, routers, network edge servers, access points, and other devices. This layer can process, transmit, and temporarily store data [74].
(3) The cloud computing layer offers the possibility of processing large amounts of data and provides a wide range of services.

Fog nodes are placed closest to IoT devices and ingest data from them. A fog node routes the collected data to the most suitable location for processing, and itself collects, processes, analyzes, and stores the most sensitive data. Moreover, any indication of a fault detected by the sensors can be handled at the closest fog node, and a response can be sent to the actuator with minimal delay. The fog node must support the integration of IoT devices and their environment with the cloud; to handle IoT resources effectively, a computing model that serves both ends, IoT and cloud, is needed. Furthermore, with the appearance of the sixth generation of wireless communication (6G), resource management in fog computing would become easier owing to the larger communication channel it offers [75]. Fog computing is therefore characterized by several distinctive factors, such as low latency, real-time analysis, bandwidth conservation, a high level of security, geographic distribution, proximity to users, business agility, overall service management, and redundancy [13].
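The fog-node data flow described above can be illustrated with a minimal sketch: anomalous sensor readings trigger a local response at the edge, while routine readings are forwarded upstream to the cloud layer. The device names and the anomaly threshold are hypothetical:

```python
# Toy fog-node pipeline: split incoming IoT readings into (a) anomalies that
# need a fast actuator response at the edge and (b) routine data forwarded
# to the cloud for long-term storage. Threshold and names are illustrative.

def fog_node_process(readings, threshold=100.0):
    """Return (local_alerts, to_cloud) for a batch of (device, value) pairs."""
    local_alerts, to_cloud = [], []
    for device, value in readings:
        if value > threshold:
            local_alerts.append((device, value))  # low-latency edge response
        else:
            to_cloud.append((device, value))      # batched to the cloud layer
    return local_alerts, to_cloud

alerts, upstream = fog_node_process([("pump-1", 140.0), ("pump-2", 60.0)])
print(alerts)    # [('pump-1', 140.0)] -> actuator notified at the edge
print(upstream)  # [('pump-2', 60.0)]  -> sent to the cloud
```

This split is what conserves bandwidth and keeps response times short: only data that does not need an immediate reaction crosses the wide-area link.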

4.3.2. Application of Fog Computing in Healthcare

In [76], the authors designed a system comprising a series of software modules that enable a phobia patient to engage in a therapy session with a remote specialist therapist. The patient and the therapist share the same virtual reality environment. The fog paradigm was used to satisfy the stringent Tactile Internet requirements, such as a 1 ms round-trip latency. The architecture uses high-level interfaces to communicate with external software applications and a broad range of hardware devices. The work in [77] presents an automated technological analysis for telesurgery based on 5G, IoT, the Tactile Internet, and Artificial Intelligence (AI); a fog-assisted interactive model is used to reduce latency during the procedure. Furthermore, in the study in [78], the authors assessed a prototype that gathered a patient’s electrocardiogram traces and utilized the user’s smart device as the fog layer to safely share the data with certified parties, allowing patients to share information with their doctors.

5. Opportunities and Challenges in Cloud Computing

This section first presents the obstacles and opportunities in cloud computing, then discusses the challenges of each technology in the evolution of the cloud, and finally outlines the challenges of the evolutionary trend in general.

5.1. Obstacles and Opportunities for Cloud Computing Development

The cloud computing field gives the IT industry tremendous opportunities. It offers businesses an opportunity to innovate and to transform themselves into more reliable and flexible service providers. Nevertheless, cloud computing is still at an early stage in many organizations [79].

Some opportunities that this technology provides are summarized in Table 2.

Cloud computing reflects a tremendous transition in ICT and offers immense benefits to many organizations and businesses. Cloud infrastructure aims to significantly reduce ICT running costs [80, 81]. It also promises to deliver scalability [82], reliability and availability [83], agility [84], and flexibility [85]. Besides, it promises many other advantages that enable organizations to concentrate on their business processes while leaving the IT business to the cloud provider. However, while cloud infrastructure has several advantages, many problems must be tackled before it can be a feasible ICT solution, especially in developing countries. According to [86], cloud-based issues fall into six main categories, namely, data management and allocation of resources, security and privacy, load balancing, scalability and availability, server migration and compatibility, and interoperability and communication between clouds. Each of these problems influences the reliability and performance of cloud-based environments. In summary, the main challenges in cloud computing are provided in Table 3.

5.2. Challenges in Cloud Federation

A federated cloud computing system is a dynamic distributed system built from pooled computational resources. Unexpected user requests and the consequences of external events beyond the control of users and system administrators may cause several challenges, such as the following:
(1) Reliability and interoperability: since a cloud federation is a partnership between several CSPs [94], the heterogeneity of the federation may affect its performance. Ensuring the compatibility of the shared resources within the federation, and avoiding any type of anomaly so that user requests are treated efficiently regardless of the technology each CSP uses, is a very challenging task [95]. Automated ways of detecting anomalies in the federation are critical to avoid SLA violations among CSPs, and between CSPs and users; such automated anomaly detection would also be helpful for disaster management.
(2) Resource pricing: customers and providers of cloud resources are rational and seek to increase their own benefit as much as possible while consuming and sharing resources. Since user requests are handled within the federation, a pricing mechanism is used to control the individual rationality of customers and suppliers; dynamic pricing strategies are therefore crucial [96].
(3) Load balancing: it is usual for more than one provider to process a user request in a federated cloud environment. In such situations, allocating user requests fairly among CSPs through load-balancing methods, so that the workload is shared transparently, becomes complicated [97].
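The load-balancing challenge above can be made concrete with a minimal least-loaded dispatch sketch. The provider names and capacities are hypothetical, and a real federation would also weigh SLA terms and pricing, not just spare capacity:

```python
# Minimal sketch of transparent workload sharing across federated CSPs,
# using a least-loaded dispatch policy. Names/capacities are illustrative.

def dispatch(request_load, providers):
    """Assign a request to the CSP with the most spare capacity.

    Each provider is a dict with keys "name", "capacity", and "load".
    Raises RuntimeError if no provider can absorb the request.
    """
    best = max(providers, key=lambda p: p["capacity"] - p["load"])
    if best["capacity"] - best["load"] < request_load:
        raise RuntimeError("federation saturated")
    best["load"] += request_load  # record the assignment
    return best["name"]

csps = [{"name": "csp-a", "capacity": 100, "load": 80},
        {"name": "csp-b", "capacity": 100, "load": 20}]
print(dispatch(30, csps))  # csp-b has the most spare capacity
```

Even this toy policy shows why federated load balancing is hard: the dispatcher needs an up-to-date, cross-provider view of load, which the heterogeneity of real CSPs makes difficult to maintain.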

5.3. Challenges in Edge Computing

Several technological challenges result from the complexity of edge computing, such as reliability, mobility management, heterogeneity, security and privacy, scalability, and resource management. Edge computing also faces other significant open challenges, such as the following:
(1) User trust in edge computing systems: the success of any innovation is positively related to its acceptance by customers, and trust is one of the most significant factors in the endorsement and adoption of edge systems. Because customer confidence is closely tied to the protection and privacy offered by the technology, mishandled user data will certainly break consumer trust, and the technical advantages of these systems will then go unrealized. Research efforts to build customer confidence models for edge computing systems must therefore be undertaken [98].
(2) Agile and dynamic pricing models: developing dynamic and agile pricing models is challenging, as a single pricing model may not succeed across many kinds of consumer interaction. It is also difficult to cover heterogeneous edge computing systems with best-fit pricing models that offer mutual gains for service providers and consumers. However, the pricing model used for cloud services, such as “pay-as-you-go,” may serve as a starting point for dynamic pricing models in edge computing systems [99].
(3) Discovery, delivery of service, and mobility: service discovery in distributed edge computing systems is challenging given the rising number of mobile devices that need services simultaneously and without interruption. The task becomes harder because of the delay involved in identifying and choosing among the available facilities and resources. In heterogeneous edge computing systems, the automated and user-transparent discovery of suitable edge computing nodes according to the required resources also challenges service discovery mechanisms. Peer-to-peer networking techniques may contribute to the design of efficient, user-transparent solutions for edge computing systems [100].
(4) Cooperation among disparate edge computing systems: an edge computing system comprises various heterogeneous technologies that serve to communicate information. The heterogeneous nature of edge infrastructure enables access to different edge devices through different wireless mechanisms, but it also makes synchronization, data confidentiality, load balancing, and interoperability challenging [101].
(5) Low-cost fault tolerance deployment models: edge computing is built on mechanisms that enable high availability, efficient disaster management, fault tolerance, and so forth. However, building a low-cost fault tolerance mechanism for this technology is extremely difficult [49].
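Item (2) above suggests extending "pay-as-you-go" toward dynamic pricing. A toy version multiplies a base rate by a demand-driven surge factor; the base rate and surge formula below are illustrative assumptions, not values from the cited work:

```python
# Toy dynamic pricing for edge resources: a pay-as-you-go base rate scaled
# by current node utilization. BASE_RATE and the surge formula are
# hypothetical, chosen only to illustrate the idea.

BASE_RATE = 0.05  # currency units per CPU-second (assumed)

def edge_price(cpu_seconds, utilization):
    """Price a job; the rate rises linearly with node utilization (0.0-1.0),
    nudging consumers toward off-peak use while rewarding providers at peak."""
    surge = 1.0 + utilization  # 1.0x when idle, up to 2.0x when fully loaded
    return round(cpu_seconds * BASE_RATE * surge, 4)

print(edge_price(100, 0.0))  # -> 5.0  (off-peak)
print(edge_price(100, 1.0))  # -> 10.0 (peak: double the off-peak price)
```

A real model would of course need to balance mutual gains for providers and consumers across heterogeneous edge systems, which is precisely why the paper flags this as an open challenge.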

5.4. Challenges in Fog Computing

There are several open research problems in fog computing, although many of its challenges overlap with those of edge computing, since fog computing can be viewed as an implementation of edge computing [102]. Heterogeneity, QoS management, scalability, versatility, federation, and interoperability are the most pressing problems of fog computing [103].

Because of its location at the edge of the Internet, the fog network is heterogeneous. The fog network is responsible for linking each part of the fog; however, network management, maintaining connectivity, and delivering services become more complicated, particularly in large-scale IoT scenarios [69, 70]. The challenges of fog computing are listed as follows:
(1) Mobility: in several domains, such as healthcare, smart cities, and the Internet of Vehicles (IoV), fog nodes are primarily mobile, which makes data management, in terms of data storage, resource provisioning, resource availability, and service migration, much more challenging [68].
(2) Security: data privacy preservation and data protection are two critical data management challenges arising from the mobile nature of fog nodes. Consequently, data collection, sharing, replication, offloading, and aggregation become more complex, and authentication and data access control are complicated to manage on unsecured fog nodes [104].
(3) Distributed processing: efficient local processing on mobile or static nodes is a critical concern in distributed data processing. Since distinct behaviours and responses may be required in different situations, identifying the data context at a fog node, so that problems are solved correctly under the right conditions, is another open problem [105].
(4) Storage and computational resources: complex data analysis and long-term data storage are difficult to achieve on fog devices because of their limited storage and computing resources [106].

5.5. Summary of the Gaps Identified in Each Technology

Table 4 gives an overview of the challenges encountered in cloud computing and its related technology.

5.6. Challenges of the Evolutionary Trend of Cloud Computing

Quality attributes such as power consumption, latency, privacy, fault tolerance, and resource sharing have been used to assess the performance of the evolutionary technologies of cloud computing [107]. Other attributes, such as the heterogeneous nature of the cloud paradigm, might influence decision-making when identifying the critical quality attributes and the corresponding metrics that quantify the importance of choosing a specific cloud paradigm [108]. This heterogeneity adds complexity to deciding where to deploy one of these technologies among all possible combinations, which requires a thorough analysis of the various aspects that can influence the SLA [16]. Most of the literature focuses only on quality attributes that are easily measurable; however, other parameters may also prove relevant.

As mentioned in the literature, the cloud computing paradigm is a heterogeneous environment; moreover, compatibility, portability, and maintainability are quality attributes that have not been included in evaluations of the performance of the different cloud paradigms. These parameters are relevant: portability refers to “the ease with which a device, product, or component can be moved from one hardware, software, or other operating system or user environment to another”; compatibility refers to “the ability of a device, system, or component to communicate with many other products, systems, or components”; and maintainability, which may be assessed through modularity, refers to “the degree to which a device, system, or computer program is made up of discrete components, such that changing one does not affect the others” [109, 110].

The new cloud computing paradigms incorporate sensing devices, computers, and other electronic equipment for overall communication. They span a wide variety of geographical locations, which broadens the risk of vulnerability. Authorizing and authenticating a large number of nodes is complex, so strategies that can dynamically assess the security of different nodes are needed [13].

The tremendous growth of data may cause failures that cannot be detected at a small scale. This phenomenon makes fault diagnosis and fault tolerance more challenging for resource management, particularly resource monitoring, and can affect the integration or implementation of a cloud computing paradigm in applications where such decisions must be made in advance [13].

The cost of each new paradigm affects its adoption in the marketplace. The deployment of a new cloud computing paradigm should therefore consider the required price beforehand; in particular, the costs of installation and configuration tools are significant concerns when adopting a cloud computing paradigm for a specific application [93]. Meanwhile, other technologies, such as cyber-physical systems, are gaining ground over a considerable range of applications and businesses. A cyber-physical system provides a controllable, credible, and scalable physical system that deeply embeds the capabilities of computation, communication, and control, based on data acquired from IoT. Integrating cyber-physical systems with the cloud paradigm, though it increases cost, yields safety-critical systems that are more robust [111, 112].

Security in the emerging paradigms is an entire area that needs more attention. A large number of sensitive transactions are carried out within the cloud environment, and breaches can undermine user trust. Distributed Denial of Service (DDoS) is the most frequently encountered attack in the cloud environment, and no single effective solution eradicates these security issues [113]. Several techniques have been proposed to address them. For instance, [114] suggested a Blockchain-Assisted Secure Fine-Grained Searchable Encryption (BASE) scheme for a cloud-based healthcare cyber-physical system that provides an attractive level of security but requires considerable processing power. Meanwhile, other techniques, such as Software-Defined Networking (SDN), can in certain circumstances improve the DDoS detection and mitigation capabilities of the cloud [115]. Deploying additional security policies therefore involves extra effort: cost, the development of strong encryption algorithms, high demand for computational resources, and a high level of monitoring.

6. Conclusion and Future Research

Cloud computing, cloud federation, edge computing, and fog computing are key technologies that have revolutionized the IT domain. These paradigms have significantly changed how people process, store, and transmit data worldwide, and they continue to drive research in a field that changes drastically with time. This paper has presented a detailed review of these paradigms, illustrating in particular their contributions to the healthcare ecosystem and the challenges that militate against the performance of each technology. We also provided details about their architectures and the improvements they have brought to cloud computing. Future research directions include a systematic review of machine-learning algorithms that help identify anomalies in a federated healthcare cloud environment, in order to improve its QoS, which can degrade owing to the environment’s heterogeneity. A comparative study of fog-edge computing and cyber-physical systems will also be explored.

Data Availability

This is a review article and no underlying data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

The authors acknowledge the Covenant Applied Informatics and Communication Africa Centre of Excellence (CApIC-ACE) domiciled at Covenant University for funding this work with the ACE Impact grant from the World Bank through the National University Commission, Nigeria. The Covenant University Center for Research, Innovation and Discovery (CUCRID), Covenant University, is also acknowledged for providing funds towards the publication of this study.