Abstract

The combination of abundant bandwidth and growing hardware capacity has driven vigorous development of the Internet. At the same time, problems such as delay and uneven task load on nodes arise during network use; these problems degrade network performance and, in turn, the quality of service delivered to users. In cloud computing, users expect a high level of service from the provider. Cloud services also facilitate the execution of complicated tasks that need large-scale storage and computation. In this paper, we implement a new technique that sustains service quality by assigning each task to the best free available node, as labeled in advance by the manager node. The Cloud Computing Alarm (CCA) technique provides complete information about the service nodes and identifies which one is ready to receive a task from users. According to the simulation results, the CCA technique yields clear improvements in QoS, which should encourage more users to adopt the service. The results also show that CCA improves the service without degrading network performance, completing each task in less time.

1. Introduction

Online services have become increasingly desirable to end users, and the steady promotion of new products poses a great challenge for public and private companies seeking to attract users. Cloud computing has come into wide use thanks to the rapid development of the Internet and of modern devices such as smartphones and tablets. Businesses have increasingly invested in cloud computing services: according to International Data Corporation (IDC), spending on cloud computing services reached about $42 billion in 2012, compared with $16 billion in 2008. Companies such as Google, Yahoo, Microsoft, and Amazon have already invested in cloud computing services. Growing network bandwidth and advancing hardware continue to enable new technologies related to cloud computing [13]. Cloud computing allows users to run applications and software from any place, because many nodes can cooperate to perform the requested services. Meanwhile, Internet applications keep being updated and developed with high-performance multimedia and high-quality devices in the network [2, 4]. In a cloud computing environment, users can log in and access Internet application software. Cloud computing can also offer different environments according to service needs, and the continuing growth of the Internet infrastructure allows computing services to be allocated to different places. By offering easy access to services, cloud computing also encourages distributed system design and applications that support user-oriented services. A busy, heavily loaded CPU node has always been a problem in cloud computing, requiring a load-balancing mechanism or a technique that can distribute job tasks among the service nodes; the CCA technique is one solution that alleviates the impact of load on the service nodes in the network.
Recent research has been devoted to finding solutions that balance the load on the service nodes that execute the job tasks required by users. The aim of cloud computing is to deliver high-performance computing power, normally consumed by a huge number of users (e.g., in military applications), performing tens of trillions of computations per second. A cloud typically consists of large groups of servers built from low-cost consumer hardware. Many services run through cloud computing over the Internet; they are broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS) [2, 57].

Infrastructure-as-a-Service (IaaS) (e.g., https://www.amazon.com) provides the customer with virtual server instances and storage, together with application software (S/W) interfaces. An Application Program Interface (API) allows users to access and configure their virtual servers and storage. Platform-as-a-Service (PaaS) is a set of tools hosted on the service provider's infrastructure; through PaaS, users create applications on the provider's platform over the Internet. Some APIs need to be installed by customers on their personal computers (PCs). The Google applications (Google Apps) are a good example of PaaS. Software-as-a-Service (SaaS), in turn, delivers complete applications to users over the provider's hardware infrastructure [8, 9]. All services can be used through a front-end portal. SaaS is widely used and can cover anything from web-based applications to database processing, and the user can access the services at any time. Cloud computing is considered a distributed system because it can serve multiple external customers using internal technologies. Service providers offer many services to end customers under a service level agreement (SLA), according to which providers must deliver services with good performance to end users. A service may suffer short disruptions due to the many factors that affect the network, causing economic damage and degrading network performance. Researchers have therefore proposed a number of different solutions for cloud computing services, because flooding and load overflow are common and varied in the everyday operation of a cloud computing network. When overload occurs, it causes undesired behaviour: the completion time of tasks increases because the processors in the service nodes cannot execute many high-complexity jobs in a short time.
Therefore, the aim of this paper is to develop and propose a solution for cloud computing networks that alleviates load by allocating job tasks based on service node availability. The proposed algorithm should reduce the time needed to complete each task required by the users.

The main contribution of this paper is a mechanism by which the manager node, running the CCA technique, determines and labels the free service nodes based on their responses. A free service node is marked as ready to receive any task demanded by a user, reducing execution time and making the service faster.

The rest of the paper is organized as follows. Section 2 describes the related work. Section 3 presents a taxonomy of cloud computing in two parts: the first concerns the combination of hardware and software in cloud networks, and the second concerns cloud computing services. Section 4 discusses the proposed technique and its mechanism. Section 5 presents mathematical modeling and a theoretical analysis of the CCA technique. Section 6 shows the results and evaluates the performance of the proposed technique. Finally, Section 7 gives the conclusion and future work.

2. Related Work

Network protocols are a very important factor in building the network information used to construct the best routes between nodes; the resulting routing table allows data to be transmitted from source to destination via the best path. The Open Shortest Path First (OSPF) protocol is widely used in networks. OSPF uses the Dijkstra algorithm [10] as its routing algorithm: all routers use it to compute the shortest-path tree (SPT). IP packets are routed by OSPF based on the IP destination address in the IP packet header [11]. When OSPF is used, each router discovers and maintains a full view of the network topology by flooding Link State Advertisements (LSAs), which form the complete topology graph held in each router's memory, known as the link-state database. This database contains information about the Autonomous System's topology, and each router holds an identical copy of it to ensure that data packets are forwarded over adjacent router interfaces without creating loops in the network topology [12]. OSPF can divide a network into a set of groups called areas, as shown in Figure 1. Cloud computing networks can likewise be divided into areas, and each area can have head nodes, also called manager nodes. Companies that provide cloud services use routing protocols so that users can execute their tasks with all network services [13]. Many products can be offered as cloud computing services. Each service might require a different open web OS, a web application that presents various environments. In cloud computing environments, applications may be degraded by errors on virtual resources [1, 2, 4], because of the hierarchical architecture of the cloud computing components.
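The SPT computation that each OSPF router performs with the Dijkstra algorithm can be illustrated with a short sketch; the topology and link weights below are illustrative assumptions, not taken from the paper:

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra's algorithm: shortest-path distances and predecessors from root.

    graph: dict mapping node -> list of (neighbor, link_weight) pairs.
    Returns (dist, prev); prev encodes the shortest-path tree (SPT).
    """
    dist = {root: 0}
    prev = {}
    heap = [(0, root)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Example topology: each router is a vertex, links carry OSPF costs.
topology = {
    "A": [("B", 1), ("C", 4)],
    "B": [("A", 1), ("C", 2), ("D", 5)],
    "C": [("A", 4), ("B", 2), ("D", 1)],
    "D": [("B", 5), ("C", 1)],
}
dist, prev = shortest_path_tree(topology, "A")
print(dist)  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Each router takes itself as the root and runs this computation over the link-state database, so all routers derive consistent, loop-free forwarding paths.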
Cloud computing is a form of distributed computing in which services are provided to external customers [14]. The cloud provider must execute a large number of tasks for users, relying on the strong infrastructure of the cloud. In [9, 10], a technique called ZXTM was introduced that shares load-balancing services and offers a low-cost starting point. ZEUS has also developed software to provide every customer with a dedicated application, and it has provided many frameworks to enhance and develop new cloud computing methods [15]. According to these frameworks, a three-level hierarchical topology is adopted in the cloud. Each node in the cloud computing system can provide full information about its usability and load, which helps improve system performance, and many different methods have been used to collect such information from each node [16]. The agent is one of these methods: agent technology has inherent navigational autonomy and can migrate to other nodes. In addition, an agent does not need to be installed on each node in the network; it can gather the required information, such as CPU utilization and remaining memory, by participating in the cloud computing environment. In [8, 17], the authors described an algorithm called Opportunistic Load Balancing (OLB), which attempts to keep each node busy and hence does not consider the workload of each node. The advantage of OLB is that it achieves load balancing between nodes by distributing unexpected tasks to the available nodes regardless of their workload [18, 19]. In [5], the authors proposed a Minimum Execution Time (MET) algorithm that assigns each task to the node expected to execute it in the shortest time. MET dispatches the task without checking the workload of each node; therefore, load balancing is not achieved in this case [20, 21].
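The contrast between OLB and MET can be made concrete with a minimal sketch; the task names, node names, and timing figures are illustrative assumptions:

```python
def olb_assign(tasks, nodes):
    """Opportunistic Load Balancing: hand tasks out round-robin so every
    node stays busy, ignoring per-node workload entirely."""
    return {task: nodes[i % len(nodes)] for i, task in enumerate(tasks)}

def met_assign(tasks, expected_time):
    """Minimum Execution Time: send each task to the node with the lowest
    expected execution time for it, without checking current workload."""
    return {task: min(expected_time[task], key=expected_time[task].get)
            for task in tasks}

tasks = ["t1", "t2", "t3"]
nodes = ["n1", "n2"]
expected_time = {
    "t1": {"n1": 4.0, "n2": 7.0},
    "t2": {"n1": 3.0, "n2": 6.0},
    "t3": {"n1": 5.0, "n2": 9.0},
}
print(olb_assign(tasks, nodes))          # {'t1': 'n1', 't2': 'n2', 't3': 'n1'}
print(met_assign(tasks, expected_time))  # every task lands on n1
```

Note how MET piles all three tasks onto the fastest node, n1, illustrating exactly the load imbalance described above.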

3. Taxonomy of Cloud Computing

There are many taxonomies of cloud computing networks, and different companies have started to offer cloud services from the perspective of enterprise IT. A cloud computing network has four layers for software applications:
(i) Software-as-a-Service (SaaS).
(ii) Platform-as-a-Service (PaaS).
(iii) Infrastructure-as-a-Service (IaaS) or Hardware-as-a-Service (HaaS), which are considered similar. HaaS is a subset of IaaS; both define a service provision model for hardware that is defined differently in managed services and grid computing contexts. In managed services, HaaS is similar to licensing; in grid computing, it is a pay-as-you-go model.
Briefly, the cloud computing architecture layers are used on demand to perform a specific job. Users can access cloud computing services from anywhere in the world, but they need an access point with a high-performance network and permission from the cloud provider to access their accounts, as shown in Figure 2.

3.1. Virtualization of Cloud Computing Management

Virtualization management concerns the combination of hardware and software, such as the operating system. In this regard, cloud computing reduces cost and enhances business value in order to improve agility [22]. Virtualization in cloud computing takes many forms, such as server virtualization, storage virtualization, and network virtualization. The main idea of virtualization is to provide important advantages in resource sharing, manageability, and isolation in the network.

3.2. Cloud Computing Services

Cloud computing networks comprise applications and hardware together with software. All services are presented through SaaS, and the service provider, with hardware and software integrated together, forms the cloud. Any application requires a model of computation, storage, and communication. Moreover, cloud computing resource allocation and service provision are divided into the layers mentioned above; the types of service provisioning thus allow need-based selection among cloud services. Agility and availability give a great advantage, since resources in cloud computing can change quickly and without capital expenditure; agility can be measured as a rate-of-change metric [23]. If cloud computing is considered in terms of agility, the service provider needs to understand whether the service is elastic and flexible [24]. Pricing of services between the provider and users necessitates a service level agreement, which acts as the interface between the data centre and the users. The SLA leads to a high-quality service based on the QoS requirements and ensures resource availability. Security and trust are important factors in cloud computing. Many studies have developed software to support the cloud computing system, to identify security and privacy requirements and then determine a suitable cloud service provider, particularly with respect to QoS. The author in [25] proposed an integrated QoS utility to address the cloud-service-provider selection problem, based on a multidimensional QoS approach that increases user satisfaction and utilization. The authors in [26] addressed four main issues in solving the service selection problem in the cloud, covering sociability, flexibility, and automatic user support. All of these parameters, such as SLA, QoS, and security, are important factors in cloud networks and in networks generally.
When all these factors are achieved, network performance improves through reduced delay and increased throughput, as shown in the results section. Additionally, users will begin to trust the cloud computing service once they realize that their privacy is secured and protected by the service provider through the SLAs.

4. Proposed Work (Cloud Computing Alarm Techniques)

In this paper, we propose a new technique that notifies the manager node about free and busy nodes. The proposed technique also aids in distributing tasks among all nodes present in the cloud. The Cloud Computing Alarm technique improves service in the cloud by providing the manager node with up-to-date information about all of its nodes, so that it can distribute the tasks requested by users. The technique requires each service node to send a small packet to the manager node to inform it that the node can receive tasks to execute. Once a node labeled as free receives a task, the CCA instance running on that node sends a new message back to the manager node to update its information in the cloud computing network. When a user logs in to perform some tasks, the manager node sends these tasks to the appropriate node based on the information provided by the CCA.

The performance goal of the CCA technique is to minimize the completion time of all tasks. The biggest challenge for CCA is finding the free service nodes; future enhancements of the technique will also consider the workload and CPU usage percentage of each service node in order to assign the task to the best available node. Figure 3 shows the three hierarchical levels of the cloud network: the first level is the request manager node, used to assign a task to a suitable service manager node, and the second level contains the service manager, which selects a service node and dispatches the task to it. Figure 4 illustrates the router model when the routing protocol is running in the network; it shows the placement of Open Shortest Path First (OSPF) in the route processor. OSPF allows the designated router in each area to hide topology details from other areas, in order to decrease the amount of traffic that must travel between routers. Unlike the Router Information Protocol (RIP), OSPF suits any network diameter, thanks to its advantages in handling large networks. In a network, all links have numerical weights; each router takes itself as the root and computes the shortest-path tree to all destinations. The figure also shows the placement of the CCA technique and how it operates alongside the routing protocol in the route processor, improving network performance for receiving and transmitting data. The Cloud Computing Alarm technique is organized around each manager node and its connected service nodes, as shown in Figure 3. Once a manager node is connected to its service nodes, it starts to send small inquiry packets to check which nodes are available to receive the tasks demanded by users.
Once the service nodes receive these packets, they reply "yes" or "no" to indicate their availability to take a task. Figure 5 presents the flowchart of the CCA technique and how the nodes act on the messages they receive. Figure 4 illustrates the router model when a routing protocol such as OSPF is running in its processor: OSPF in the route processor receives Link State Advertisements (LSAs) and processes them to construct the link-state database. From the link-state database, OSPF performs Shortest Path First (SPF) calculations, which lead to the creation of the Forwarding Information Base (FIB). Using the FIB, the line card determines the next hop used to forward packets to the outgoing interface. The network operator assigns a weight to each link, and the shortest path is computed using these weights as link lengths. As shown in the flowchart, we place the CCA technique at the same level in the router layers so that it cooperates with the routing protocols.

While many proposed solutions aim to balance load between nodes using various techniques, the CCA technique has been built to achieve robustness and fast service within a short time by finding the available nodes. Without it, the results show that many tasks are distributed randomly, without any planning. CCA therefore finds the path to a free node via the inquiry packets exchanged between the manager node and its service nodes. If two nodes are available, the manager node receives the first reply from one of them and assigns the task to that first replying node, regardless of its location. The main goal of the CCA technique is to reduce the delay in finding a free node to execute and complete the required task. The core principle of CCA is that once the network routers receive notification of the free and available service nodes, CCA in turn assigns the task, via the manager node, to the best available node.

Additionally, the CCA approach avoids assigning a task to service nodes that are already executing other tasks. CCA is invoked by the manager node, which, after receiving the reply packets from the service nodes, determines which node can take the task and become available to the user. The nodes must regularly update their status once they have completed their tasks.

The mechanism of the CCA technique is illustrated in the flowchart in Figure 5, which shows how a node is selected. The technique selects the free nodes in the following steps:
(i) The manager node broadcasts the inquiry packets to the service nodes.
(ii) Each service node checks its availability and then sends an acknowledgment message ("yes" or "no") to the manager node.
(iii) Based on the ACK messages, the manager node decides which node will take the incoming task required by the user.
(iv) The manager node keeps sending these messages frequently to keep the service-node availability information up to date. If there is no positive answer, the manager node does not accept the task, even though a large number of service nodes may still exist in the cloud.
(v) If all answers are negative, the manager node keeps searching until it finds a free service node.
Algorithm 1 presents these steps in algorithmic form.

Begin
  for all v ∈ V
    do for all u ∈ Adj(v)
      do if P(v, u) ∉ S
        then Enqueue(Q, (P(v, u), u))
  while Q ≠ ∅
    do (p, v) ← Dequeue(Q)
       for all u ∈ Adj(v)
         do if u ∉ p
           then if p ∘ (v, u) ∉ S
             then S ← S ∪ {p ∘ (v, u)}
End
RT: the routing table
V: the vertex set of the graph
Adj(v): the set of vertices adjacent to a vertex v
P(u, v): the path connecting the vertex u to v as in RT
S: the set of all generated alternative paths
Q: a queue of (path, vertex) couples
Enqueue: inserts an element into a queue
Dequeue: removes an element from a queue
front(Q): the element at the front of a queue
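The five selection steps listed above can be sketched as a small simulation; the class and method names here are our own illustrative assumptions, not from the paper:

```python
class ServiceNode:
    def __init__(self, name, busy=False):
        self.name = name
        self.busy = busy

    def answer_inquiry(self):
        # Step (ii): reply "yes" if free, "no" if busy.
        return "no" if self.busy else "yes"

class ManagerNode:
    def __init__(self, service_nodes):
        self.service_nodes = service_nodes

    def assign_task(self, task):
        # Step (i): broadcast inquiry packets to all service nodes.
        for node in self.service_nodes:
            # Step (iii): the first positive ACK wins, regardless of location.
            if node.answer_inquiry() == "yes":
                node.busy = True  # the node is now labeled as taken
                return node.name
        # Steps (iv)-(v): no positive answer, so the task is not accepted yet;
        # the manager keeps re-sending inquiries until a node frees up.
        return None

manager = ManagerNode([ServiceNode("s1", busy=True), ServiceNode("s2")])
print(manager.assign_task("render-job"))  # 's2'
print(manager.assign_task("backup-job"))  # None - all nodes are now busy
```

A completed task would flip the node's `busy` flag back to `False` via the status-update message described in step (iv).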

5. Cloud Modeling and Theoretical Analysis

In cloud computing, users need to execute several tasks at the same time. A theoretical analysis using mathematical methods shows how the technique improves the service even though extra inquiry packets are sent in the network. In our analysis, we used three equations from [20]. If we assume that the CCA inquiry packets are sent every 30 minutes, this can be considered a long interval; further experiments can therefore send the inquiry packets at shorter intervals to find the optimal timing and update the states of all service nodes at the manager node without causing any congestion in the network. Changing the inquiry interval might, however, introduce problems depending on the number of service nodes. The header of an inquiry packet includes two fields:
(i) ret: a flag that contains "0" or "1", where "0" denotes a negative answer and "1" a positive answer.
(ii) Time interval.
The "ret" field is set to "0" while the packet is on its way from the sender to the node being inquired, and to "1" on its way back. The send time is a timestamp set on the packet when it is sent, later used to calculate the round-trip time. In the following model, we show that the inquiry packets do not load the links on the path between source and destination. Assume two nodes A and B, with A the source and B the destination; the link capacity is 1 Mb, and the packet size is 1000 kb. We configure the CCA technique to work on this cloud computing topology with a rate of 500 kb.
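The inquiry packet header described above, with the "ret" flag and the send-time stamp used to compute the round-trip time, can be modeled as follows; the field layout beyond the two named fields is an assumption for illustration:

```python
import time
from dataclasses import dataclass

@dataclass
class InquiryPacket:
    ret: int          # 0 on the way to the inquired node, 1 on the way back
    send_time: float  # timestamp set by the sender, used to compute the RTT

def send_inquiry():
    # The sender stamps the packet and marks it as outbound (ret = 0).
    return InquiryPacket(ret=0, send_time=time.monotonic())

def node_reply(packet):
    # The inquired node flips the flag for the return trip.
    packet.ret = 1
    return packet

def round_trip_time(packet, now=None):
    now = time.monotonic() if now is None else now
    return now - packet.send_time

pkt = node_reply(send_inquiry())
print(pkt.ret, f"RTT: {round_trip_time(pkt):.6f} s")
```

The manager node can use the measured RTT to tune how frequently it re-sends inquiries to each service node.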
We compute the utilization of the link with the inquiry packets to see the difference [17]. The inquiry packet size is 16 bits; assuming an interval of 30 minutes, the link utilization is 16 bits / (1 Mb/s × 1800 s) ≈ 8.9 × 10⁻⁹, or about 8.9 × 10⁻⁷% [16]. As we can see, the inquiry overhead is negligible even if the interval is made shorter than 30 minutes. If the link utilization reaches 100%, inquiry packets are withheld until the utilization drops below 100%. Even if we decrease the interval to less than 30 minutes, the link utilization due to inquiry packets remains negligible, as shown above.
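The utilization arithmetic can be checked numerically, using the figures from the scenario above (1 Mb/s link, 16-bit inquiry packet, 30-minute interval):

```python
def inquiry_link_utilization(packet_bits, link_capacity_bps, interval_s):
    """Fraction of link capacity consumed by one inquiry packet per interval."""
    return packet_bits / (link_capacity_bps * interval_s)

u = inquiry_link_utilization(packet_bits=16,
                             link_capacity_bps=1_000_000,
                             interval_s=30 * 60)
print(f"{u:.2e}")  # 8.89e-09 - negligible

# Even at a 1-minute interval the overhead stays tiny:
print(f"{inquiry_link_utilization(16, 1_000_000, 60):.2e}")  # 2.67e-07
```

This confirms the claim that shortening the inquiry interval well below 30 minutes still leaves the per-link overhead effectively at zero.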

5.1. Research with Comparative Study of CCA Technique

While the CCA technique creates inquiry messages to check service-node availability and to update the manager node about each service node's current state, we can compare cloud computing networks with and without the CCA technique against traditional cloud networks. Our study differs from other related studies in that it focuses on the full availability of the service nodes. Moreover, the inquiry packets sent periodically have been shown not to consume a large amount of network bandwidth or to degrade network performance, owing to their small size, as shown in Section 6. Furthermore, in some of the related work mentioned above, researchers designed load-balancing algorithms that distribute the same job task to many service nodes. This is a good idea but cannot guarantee that the job executes with less delay or in better time, because the job task must then run on all service nodes synchronously, so any delay at any node increases the overall delay time, and congestion might occur.

6. Research Environment and Results

A network simulation (NS2) was performed to evaluate the performance of the proposed technique in the cloud computing network. The simulation results of the CCA technique show improvements in distributing the job tasks to the service nodes, and the evidence gathered by the NS2 simulation offers good support for node distribution in cloud computing networks. In the experiments, each manager node sends packets to, and receives packets from, its service nodes to check which nodes are free to receive and execute the required task. Before evaluating the performance of the cloud computing topology with respect to a manager node and the number of service nodes it has, it is important to determine the network parameters that could affect the QoS of other job tasks. Here, the research focuses on two parameters that best reveal the effect of traffic on the technique: Task Deliver Time, the duration between sending and receiving data/inquiry packets, and Average Executed Time, the average time to complete the tasks on each node. In order to evaluate the effect of increasing the number of manager or service nodes, it is necessary to account for the large number of users that may join and seek to complete more tasks.
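The two metrics can be computed from simulation traces with a simple helper; the trace format below is an illustrative assumption, not the NS2 trace format:

```python
def task_deliver_time(send_ts, recv_ts):
    """Task Deliver Time: duration between sending and receiving a packet."""
    return recv_ts - send_ts

def average_executed_time(completions):
    """Average Executed Time: mean time to complete the tasks on each node.

    completions: dict mapping node -> list of (start, finish) timestamp pairs.
    """
    return {
        node: sum(finish - start for start, finish in runs) / len(runs)
        for node, runs in completions.items()
    }

completions = {
    "service-1": [(0.0, 2.0), (2.0, 5.0)],  # tasks took 2 s and 3 s
    "service-2": [(0.0, 4.0)],
}
print(average_executed_time(completions))  # {'service-1': 2.5, 'service-2': 4.0}
```

Aggregating these per-node averages across runs yields the curves plotted in the figures of this section.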

6.1. Results

Once the CCA technique starts to work in the network, the network's ability to receive and execute job tasks improves, as shown in Figure 6. The results show that the cloud network with the CCA technique executes more tasks. This is because the distribution of tasks from the manager node to the service nodes is not random; it is based on the manager node's checking list of which service nodes are free to start executing. When a free service node executes an application task for the user and has no other task, the response is faster. Figure 7 shows the results for the cloud computing network with and without the CCA technique. The manager node takes less time to distribute jobs to the service nodes because the inquiry packets let it identify the free nodes in its database and distribute all tasks in less time. On the other hand, Figure 8 shows that broadcasting messages increases the overhead and the utilization between nodes in the network. Moreover, the service nodes consume more bandwidth than the manager node, because each manager node serves a very large number of service nodes. Even so, the utilization under the CCA technique remains at a reasonable level, as the figure shows. In this case, we focus on the manager node results because of its responsibility for task distribution. Figure 10 shows the available nodes after the inquiry packets have been sent to all service nodes, with results for different numbers of manager nodes. In practice, when the number of manager nodes increases, it takes more time to update their information about the service nodes, but the service nodes become more active and their availability increases.
Ultimately, the results show a small but consistent difference: when the number of manager nodes increases, the service improves. If more users join the network and require more job tasks to be executed, a cloud with more manager nodes reacts with better QoS, regardless of the number of inquiry packets already sent.

Based on Figure 9, we compared the completed tasks with and without the CCA technique on the same cloud computing network, using the same environment as in the previous results. With CCA, the tasks were completed in measurably less time than when the cloud computing network worked without it. Additionally, CCA performs better, completing tasks in less time, as the number of manager nodes increases. A clear improvement in the execution and completion of the tasks required by the user was achieved, as shown in this figure, because the manager nodes directed each task straight to an available node, without any delay, based on the updated information.

7. Conclusions

This paper introduced a new technique called the Cloud Computing Alarm (CCA) mechanism. When a user logs in to perform a task through the service offered by the cloud computing network, the task is directed to a free service node according to its availability, with the CCA technique running on all manager nodes. CCA thus enhances network service performance by reducing the manager node's search time through frequently updated information. In addition, CCA improves the QoS for sensitive and high-priority tasks. Future work will enhance the mechanism by estimating the storage and time each task needs before execution, and by introducing load-balancing sessions when a task becomes very large.

Competing Interests

The author declares that there are no competing interests.