Abstract

We propose a hierarchical brokering architecture (HiBA) and a Mobile Multicloud Networking (MMCN) feedback control framework for mobile device-centric cloud (MDC2) computing. Exploiting the MMCN framework and RESTful web-based interconnection, each tier broker probes the resource state of its federation for control and management, and real-time, seamless services are developed on top of it. Case studies, including intrafederation energy-aware balancing based on fuzzy feedback control and higher tier load balancing, further demonstrate how HiBA with MMCN eases the embedding of algorithms when developing services. A theoretical performance model and real-world experiments both show that an MDC2 based on HiBA features better quality in terms of resource availability and network latency if it federates devices with enough resources distributed in the lower tier hierarchy. The proposed HiBA realizes a development platform for MDC2 computing, which is a feasible solution to User-Centric Networks (UCNs).

1. Introduction

A user keeps many mobile devices nearby to enjoy proactively provided services. Wearable and portable devices, together with proximity and remote servers, process a large number of requests to complete a service. Therefore, a cloud computing methodology that federates these devices and servers is desired, especially in the 5th-generation (5G) era, in which cooperation among massive numbers of devices is a key technology focus. Moreover, the latest surveys [15] on User-Centric Networks (UCNs) advocate self-organizing autonomic networks in which users cooperate through sharing of network services and resources. The self-organizing autonomic architecture, whether ad hoc or infrastructure-based, federates nominally low-cost devices and makes users a new type of resource provider and stakeholder in addition to content consumer or producer. Based on sociological relations, mobile devices proactively join the UCN and become network elements (such as routers or gateways) to share bandwidth among themselves and to obtain a more reliable network connection allowing free roaming. The user-provided networks (UPNs) proposed in [2] are one example. The contribution [3], based on software-defined networking (SDN), explores cooperation among wireless mobile devices to share network transmission bandwidth. Routing, opportunistic relaying, and sensing of hybrid types of information [5] through spontaneous networks [4] are projections of user behaviors in their social networks.

UCNs emphasize three key techniques [1]: (1) understanding user context, (2) profiling and predicting user interests, and (3) personalizing content delivery. Therefore, observing the state of the UCN underlying a specific service is essential for constructing its control and management planes. However, depending on the resource distribution among attending mobile devices and data centers, the state space can be too large to be observed, and the design of control and management algorithms becomes complex and challenging.

In this paper, we propose the HiBA architecture with hierarchical brokering and the feedback control framework MMCN for both control and management planes in UCNs. The idea comes from hierarchical fuzzy control [6], in which a complex control problem with a huge state space is divided and conquered: the cross product of the state spaces of all control hierarchies remains huge, while each hierarchy adopts only a few fuzzy rules to perform simple control tasks.

A series of contributions adopting MDC2 for improving load balance [7], user-centric security [8], and access control [9] in the cloud, as well as MDC2 applications of real-time seamless video sharing [8] and sociological ontology-based hybrid recommendation [10], has been proposed. These contributions show that MDC2 is a feasible solution for providing services in UCNs. In this paper, HiBA with the feedback control framework generalizes the brokering and state observation. Availability and network latency are analyzed, and an experimental HiBA comprising real-world mobile devices and servers is realized. We prove that a UCN with a huge state space is observable, and hence controllable, through HiBA cloud networking. We further extend our previous presentation [11] to demonstrate algorithm embedding using the MMCN development platform in both lower and higher tiers of HiBA. We study energy-aware balancing in the cloudlet tier, where dense mobile devices federate, and load balancing in bigger data centers, where large numbers of requests usually arrive in a short time. Through both theoretical analysis and real-world experiments, we conclude that a HiBA federation with MMCN is a platform for developing distributed algorithms for future 5G massive machine-to-machine (M2M) communications and UCNs.

This paper is organized as follows. Section 2 depicts the HiBA architecture and the feedback control framework. Mathematical analysis of availability and network latency is provided in Section 3. Section 4 presents the two case studies: energy-aware balancing for a cloudlet and load balancing for a bigger data center. Section 5 demonstrates real-world experiments in which the HiBA federates heterogeneous mobile and fixed devices in three tiers using different network interfaces. Conclusions are presented in Section 6.

2. System Architecture

Unlike traditional data centers, which span trees of VMs in a top-down manner from a primary computing domain (usually called domain zero) in a physical machine (PM), the proposed HiBA architecture is initiated from physical mobile devices at the bottom tier, called the cloudlet tier. Each broker dynamically connects to and adaptively controls lower tier PMs or VMs and thus avoids drawbacks such as the degradation of aggregate available bandwidth and the circuitous communication paths of VM spanning trees [12]. The system first includes resources in the cloudlets and then associates cloudlets with private and public clouds to further scale up the cloud federation. On the management aspect, we exploit an adaptive feedback control framework to tackle the uncertain dynamics of mobile ad hoc networks, such as devices dynamically joining and leaving a broker's domain due to user mobility, device failure, or physical network handoff [8].

2.1. Hierarchical Brokering

Figure 1 shows the HiBA architecture, which comprises four tiers. Tier 0, called the cloudlet tier, contains groups of mobile user devices. Each cloudlet is managed by a cloudlet broker that maintains network connectivity and member association and also performs QoS control. A cloudlet is dynamically organized by the broker according to instantaneous resource status, including CPU, memory, network capability, and energy consumption. The resources underlying a tier are abstracted as a JSON or XML list in the broker's database, accessible to federation members through a RESTful API. Each entry of the list includes a URL containing an available RESTful command or a URL of a filename together with file attributes. Fields for security configurations, NAT port, and GPS coordinates are optionally included in each entry [8]. A tier-0 device has no VM member except itself, and it aggregates its own resources to be shared with other members in the resource list. Each broker hierarchically aggregates the resource lists from its members and uploads updates to its upper tier broker.
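For illustration, the following minimal sketch shows one possible shape of a resource-list entry as a Python dictionary before JSON serialization; the field names (url, commands, attributes, security, nat_port, gps) are our own assumptions rather than a schema taken from the paper.

```python
# A hypothetical resource-list entry. A broker would serialize a list of
# such entries to JSON (or XML) and expose it through its RESTful API.
resource_entry = {
    "url": "http://broker.example/api/v1/resources/cam0",  # RESTful command or file URL
    "commands": ["GET", "POST"],        # RESTful commands available on this resource
    "attributes": {                     # instantaneous resource status
        "cpu_load": 0.35,
        "memory_free_mb": 512,
        "network": "wlan",
        "battery_pct": 82,
    },
    "security": {"scheme": "token"},       # optional fields per the paper:
    "nat_port": 8443,                      # security configuration, NAT port,
    "gps": {"lat": 22.63, "lon": 120.30},  # and GPS coordinates
}
```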

We continue using the cloudlet federation procedure from our previous research [8]. A cloudlet broker is either dynamically elected among user devices according to their resource status or selected by its upper tier (tier-1) broker, called a cloudling broker. A cloudling broker is a small data center or private cloud proximate to the cluster of cloudlets that associate with the cloudling. It is similar to the small data center proposed in [13, 14], owned by a smaller business unit such as a coffee shop or a clinic. We differ from [13, 14] in that our cloudlets are autonomously grouped by user devices, which also share resources with others. A tier-$n$ broker associates with a tier-$(n+1)$ broker to scale the federation in depth or, alternatively, includes more tier-$(n-1)$ federations to scale it in width. In the proposed architecture, each broker only recognizes its next lower tier brokers, while further lower tiers are transparent to it. Furthermore, each broker has the same feedback control framework for managing lower tier devices' network attachments and resource associations as well as for QoS control on requesting and executing services.

2.2. Feedback Control Framework

Figure 2 shows the feedback control framework MMCN of each broker in the HiBA virtualized network. The feedback loop is used to adaptively manage and control the cloud federation governed by the broker. This management includes network attachments and resource associations caused by user mobility and VM migration [8]. The framework is also ready for developing hierarchical control algorithms such as request differentiation and task scheduling, as well as real-time QoS control and job tracking once a service is started. A feedback control loop comprises three subsystems, the Cloud Inspection Subsystem (CIS), the Cloud Control Subsystem (CCS), and the Cloud Execution Subsystem (CES), which are analogous to the observer, controller, and plant, respectively, in a classic feedback control system. When performing real-time control for QoS, multiple lower tier instances of the feedback loop are allocated: a tier-$n$ CCS determines the number of tier-$(n-1)$ instances to be allocated; then the tier-$n$ CES invokes the required instances, each of which is constituted by another set of CIS, CCS, and CES at tier $n-1$.

A CCS processes tasks assigned by its upper tier broker as well as requests from lower tiers. According to the database and the states tracked by the CIS, it determines the controls, including admission of member joining, allocation of member federations for task execution, schedules of tasks, and service level agreements (SLAs) of the tasks to be assigned to member federations. A number of CES instances physically accomplish the network attachment for admission control as well as the recruiting of members associated with the corresponding SLAs to complete the current schedule. The CCS produces these determinations as controls to the CES subsystems, while the executions are left to the CESs and associated member federations. The CIS observes the state updates from lower tiers for the CCS's reference when determining these controls.
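The subsystem roles can be summarized in a minimal sketch; the class and method names below are illustrative choices of ours, not the authors' implementation.

```python
# Skeleton of one MMCN feedback loop: the CIS observes state, the CCS
# decides controls, and the CESs execute them on member federations.
class CIS:
    def __init__(self):
        self.state = {}                 # aggregated lower tier state

    def observe(self, update):          # state updates from lower tiers
        self.state.update(update)

class CES:
    def execute(self, control):         # attach members / dispatch tasks
        print("executing", control)

class CCS:
    def __init__(self, cis, ces_pool):
        self.cis, self.ces_pool = cis, ces_pool

    def decide(self, request):
        # Admission, allocation, scheduling, and SLA assignment are all
        # determined from the CIS-tracked state; execution is delegated.
        control = {"task": request, "members": list(self.cis.state)}
        for ces in self.ces_pool:
            ces.execute(control)
```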

2.3. Algorithm Embedding

This framework readily supports future development of hierarchical control and management algorithms, including admission control, request differentiation, task partition, adaptive resource allocation, and dynamic task scheduling with respect to the SLAs assigned by the upper tier broker. For example, on receiving a new request, the priority queue of requests is adapted with a new set of priorities. The CCS checks with the CIS to determine whether to forward the request to the upper tier broker or to reduce the request into smaller tasks so that the CESs can assign the task partitions to lower tier members, just as they process task assignments from the upper tier broker's CES. The CESs make lower tiers transparent to the CCS and upper tiers, as if they were the single plant of the CCS controller. Since it is the CESs and associated member federations that execute tasks rather than the CCS, the CCS can process the management and control algorithms without waiting for the end of the current job execution. This is easily realized by programming multiple threads that realize the CCS, CESs, and CIS subsystems, as sketched below.
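A minimal threading sketch of this idea, assuming thread-safe queues between the subsystems; the helpers partition and assign_to_lower_tier are hypothetical placeholders.

```python
import queue
import threading

requests = queue.Queue()   # Request Queue shared by the CCS thread
tasks = queue.Queue()      # Task Queue of partitioned tasks for the CESs

def partition(req):                    # hypothetical: reduce a request
    return [req]                       # into smaller tasks

def assign_to_lower_tier(task):        # hypothetical: recruit a member
    pass                               # federation to execute the task

def ccs_thread():
    while True:
        req = requests.get()           # the CCS never blocks on execution
        for part in partition(req):
            tasks.put(part)

def ces_thread():
    while True:
        assign_to_lower_tier(tasks.get())

# A real broker would keep these threads alive for its whole lifetime.
for target in (ccs_thread, ces_thread):
    threading.Thread(target=target, daemon=True).start()
```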

In summary, the CCS of a tier-$n$ broker forwards requests to tier $n+1$ if it cannot process them according to the CIS database. The tier-$(n+1)$ broker assigns jobs to other tier-$n$ brokers through its CES if the requested resources are sharable in these tier-$n$ members and below. Therefore, the sharing becomes an intercloud service at tier $n$ as well as an intracloud service at tier $n+1$.

3. Performance Analysis

The baseline performance for benchmarking evaluates the HiBA architecture and feedback control framework without optimization for specific management and control. That is, the queues are simple FIFOs without priority adaptation, and all devices in all tiers randomly generate requests. The performance model of the virtualized network is shown in Figure 3. Suppose that the maximum number of members of a HiBA broker is $B$ and that a broker in tier $n$ has computing capability $C_n$, which is also the minimum length of its Task Queue. The capability also represents the maximum number of tasks that can be processed by a tier-$n$ broker under the condition that no task is dropped. The resources are assumed to be uniformly distributed within each tier: in tier $n$, the average amount of contents and resources per node is $R_n$. A tier-$n$ broker receives local requests in a Poisson distribution with mean $\lambda_n$; this simulates the reinitiated requests caused by request partitions and task handoffs. $T_0$ denotes the mean initial latency caused by the network delay to forward requests and the waiting time that requests stay in the Request Queue until being partitioned into smaller tasks. In this paper, we derive baseline performances on availability and latency.

3.1. Resource Availability

We define the availability $A_n$ as the ratio of the computing capability to the number of accepted requests $Q_n$ at a tier-$n$ broker. That is,

$$A_n = \frac{C_n}{Q_n}. \tag{1}$$

In a HiBA architecture of $N$ tiers, the expected total resource amount summed from the records in the CIS databases is approximated as

$$R_{\mathrm{total}} \approx \sum_{n=0}^{N-1} B^{\,N-1-n} R_n, \tag{2}$$

supposing that each broker federates the maximum of $B$ members. Thus, the expected total resource amount of the subtree rooted at a tier-$m$ node is

$$R_{\mathrm{sub}}(m) \approx \sum_{n=0}^{m} B^{\,m-n} R_n. \tag{3}$$

For any request, the probability that the requested resource is available in a local node $v$ is approximated by

$$P_v \approx \frac{|S_v|}{|S|}, \tag{4}$$

where $S_v$ is the set of resources in node $v$ and the sample space $S$ is the union of the $S_v$ over the whole HiBA architecture. The worst case of availability occurs when the network latency is much smaller than the mean time interval between two consecutive requests; that is, requests from other cloud federations, including those in lower and higher tiers, arrive in the current tier instantly with negligible network delay. Thus, the maximal number of requests arriving at a tier-$n$ node $v$ is

$$Q_n^{\max} = P_v \sum_{k=0}^{N-1} B^{\,N-1-k} \lambda_k. \tag{5}$$

Then we obtain the worst-case availability as

$$A_n^{\min} = \frac{C_n}{Q_n^{\max}} = \frac{C_n}{P_v \sum_{k=0}^{N-1} B^{\,N-1-k} \lambda_k}. \tag{6}$$

From (6), we see that when the lower tier resource amounts $R_n$ are smaller, resources (including contents) are more centralized in high-tier nodes with larger $P_v$; if the request rates $\lambda_k$ remain large, this causes lower availability. There are thus various means to increase availability. One is to increase the capability $C_n$, though this increases costs. Alternatively, the branch amount $B$ of each tier can be increased, though this also requires more computing capability and increases costs. Third, by adopting HiBA, we put user devices in tier 0 or tier 1 (once a user device is elected cloudlet broker). The capabilities $C_0$ and $C_1$ of thin devices may be small, but then $R_0$ and $R_1$, and hence the local $P_v$, are also expected to be small. This indicates that availability remains close to 1 if the resource distribution is proportional to the capability with the ratio of the total number of requests to the total resource amount. Therefore, an optimal resource distribution to low tier brokers and mobile devices both offloads data centers' computing overhead and increases user satisfaction.
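To make (6) concrete, the following small sketch evaluates the worst-case availability under assumed, illustrative parameters; the branching factor, per-tier capabilities, and request rates are all hypothetical.

```python
# Worst-case availability per (6) under assumed parameters.
B, N = 4, 3                       # branching factor, number of tiers
lam = [3.0, 3.0, 3.0]             # mean request rate per node, tiers 0..2
C = [5, 20, 40]                   # computing capability per tier

# Total request arrivals: (number of tier-k nodes) x (rate per node).
total_requests = sum(B ** (N - 1 - k) * lam[k] for k in range(N))

def worst_case_availability(n, P_v):
    """A_n^min = C_n / (P_v * total request arrivals), capped at 1."""
    return min(1.0, C[n] / (P_v * total_requests))

# Centralized resources: the tier-2 node holds most resources (large P_v).
print(worst_case_availability(2, P_v=0.9))   # ~0.71, lower availability
# Distributed resources: each low-tier node holds a small share (small P_v).
print(worst_case_availability(0, P_v=0.05))  # 1.0, availability near 1
```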

3.2. Latency

Social networks can cause locality, such that many requests do not travel long distances to their destinations in terms of either network relay counts or terrestrial radio barriers, so the initial latency of a service is reduced. Lower tier brokers have smaller resource granules, while locality based on the social network provides access proximity and thus shorter latency. We estimate the latency of a request traversing the HiBA architecture as follows. Suppose that a request initiated from a tier-$m$ node arrives at its destination possessing the required resources in tier $d$. The expected latency of the request is

$$E[T_{m,d}] = \sum_{n=m}^{d} \bar{P}_n \left( T_{\mathrm{net},n} + T_{\mathrm{q},n} + T_{\mathrm{c},n} \right), \tag{7}$$

where $\bar{P}_n$ is the probability that the requested resource is not found in tiers lower than $n$ and $T_{\mathrm{net},n}$, $T_{\mathrm{q},n}$, $T_{\mathrm{c},n}$ are the latencies caused, respectively, by network communication, request queuing plus database querying, and brokering computation at each tier-$n$ broker on the path to tier $d$. Supposing that request arrival is independent of the resource distribution, we have

$$\bar{P}_n = 1 - \frac{R_{\mathrm{sub}}(n-1)}{R_{\mathrm{total}}}. \tag{8}$$

Thus, the estimated latency is

$$E[T_{m,d}] = \sum_{n=m}^{d} \left( 1 - \frac{R_{\mathrm{sub}}(n-1)}{R_{\mathrm{total}}} \right) \left( T_{\mathrm{net},n} + T_{\mathrm{q},n} + T_{\mathrm{c},n} \right). \tag{9}$$

If a request originates from a lower tier user device, that is, $m = 0$ or $1$, the effective way to reduce the expected latency is to reduce $\bar{P}_n$. This means increasing $R_{\mathrm{sub}}$ by distributing resources to lower tiers, that is, increasing $R_n$ for small $n$. However, this also increases both the database searching time $T_{\mathrm{q},n}$ and the computation time $T_{\mathrm{c},n}$. Considering the network delay $T_{\mathrm{net},n}$, we can expect that, in the virtualized network, the physical distance to a cloud server grows as $n$ grows: a high-tier link in the virtualized HiBA network actually contains many more physical relaying hops than a lower tier link. If the request is sent using the TCP protocol, the network latency increases much more than the latencies caused by database searching and brokerage computing as $n$ increases. From (9), we therefore still effectively decrease the latency by increasing $R_n$ in the lower tiers. This is expected especially for multimedia content sharing since, according to social network relations, contents are stored in proximate cloudlets or cloudlings, which are in lower tiers (small $n$).
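Similarly, a small sketch of (8)-(9) for a request originating at tier 0 and climbing to the top tier; the per-tier delays and resource amounts are assumptions.

```python
# Expected latency per (7)-(9): a request climbs tiers until the resource
# is found; each tier adds network, queuing/query, and computing delay.
B, N = 4, 3
R = [100, 16000, 50000]          # assumed resource amount per node, tiers 0..2
R_total = sum(B ** (N - 1 - n) * R[n] for n in range(N))

def p_not_below(n):
    """Probability the resource is not found in tiers lower than n (8)."""
    covered = sum(B ** (n - 1 - k) * R[k] for k in range(n))  # R_sub(n-1)
    return 1.0 - covered / R_total

def expected_latency(t_net, t_q, t_c):
    """Per-tier delays weighted by the miss probability, summed per (9)."""
    return sum(p_not_below(n) * (t_net[n] + t_q[n] + t_c[n]) for n in range(N))

# High-tier links hide many physical hops, so t_net grows with the tier.
print(expected_latency(t_net=[0.005, 0.02, 0.2],   # seconds, assumed
                       t_q=[0.1, 0.3, 0.5],
                       t_c=[0.05, 0.1, 0.2]))
```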

3.3. Development Platform for MDC2 Control and Management

From the availability and latency analysis, we see that HiBA with the feedback framework is a development platform: it is easy to observe the performance when developing algorithms for admission control, request differentiation, task partition, adaptive resource allocation, and dynamic task scheduling with respect to the SLAs assigned by the upper tier broker. For example, the admission control in federating a tier affects the resource distribution $R_n$, and, according to the number of network hops between a broker and a member, the network latency $T_{\mathrm{net},n}$ in (9) also differs. The request differentiation and queuing mechanism directly affect $T_{\mathrm{q},n}$, since $T_{\mathrm{q},n}$ includes the delay a request spends in the Request Queue. Task partitioning, resource allocation, and scheduling further affect the efficiency of processing requests and consequently affect $T_{\mathrm{c},n}$ and the resultant expected latency. Globally, they also affect the arrival rate $\lambda_n$ at tier $n$. Exploiting hierarchical feedback control, it is easy to ensure availability by performing proper management while tracking the latency by tuning the control with respect to the deadline specified in the SLA of each task.

4. Case Studies

To demonstrate algorithm embedding using the MMCN development platform in both lower and higher tiers, we study energy-aware balancing among mobile devices in the cloudlet tier as well as load balancing in bigger data centers, where large numbers of requests arrive in a short time, that is, large $\lambda_n$. Energy-aware balancing is critical in crowd-sensing applications, especially when the sensed data amount is large, such as when cameras are used for image data collection. Load balancing is essential for bigger data centers, especially when the database access frequency is high. Both balancing algorithms are effective in the Industry 4.0 era, in which crowd sensing and big data analytics are widely deployed.

4.1. Energy-Aware Balancing

Energy-aware balancing is based on unsupervised fuzzy feedback control in which the reference command is adapted according to the feedback state of the federation self-organized by the mobile devices. The principal idea comes from energy proportional routing [15]: the lifetime of a clustering-based sensor network is prolonged if each member node's proportion of consumed energy to remaining energy stays close to the cluster's. We exploit the energy proportion of the cloudlet federation as the adaptive reference to be tracked by the fuzzy feedback control system. The energy sharing is therefore unsupervised because no control objective is given a priori.

Suppose that, in a cloudlet of $M$ member nodes, data transmission is observed by the cloudlet broker in each round $k$ (discrete time). We first define the following symbols:

(i) $i$: member node identification, $i = 1, \ldots, M$.
(ii) $d_i(k)$: data transmission amount assigned to node $i$ at round $k$.
(iii) $e_i(k)$: remaining energy of node $i$ at round $k$.
(iv) $x_i(k) = d_i(k)/e_i(k)$: the ratio of overhead to the remaining energy (current capability). This indicates how node $i$ is loaded with respect to its remaining energy.
(v) $x_f(k) = \sum_{i=1}^{M} d_i(k) / \sum_{i=1}^{M} e_i(k)$: how the whole cloudlet federation is loaded.
(vi) $e_{x,i}(k) = x_i(k) - x_f(k)$: the difference in loading between node $i$ and the whole federation.
(vii) $\Delta e_{x,i}(k) = e_{x,i}(k) - e_{x,i}(k-1)$: how the difference changes.
(viii) $r_i(k)$: the ratio of the data amount assigned to node $i$ at round $k$.

We define the fuzzy rules over the positive ($P$) and negative ($N$) fuzzy sets of $e_{x,i}$ and $\Delta e_{x,i}$ as follows:

(R1) If $e_{x,i}$ is $P$ and $\Delta e_{x,i}$ is $P$, then $r_i$ is $s_1$.
(R2) If $e_{x,i}$ is $P$ and $\Delta e_{x,i}$ is $N$, then $r_i$ is $s_2$.
(R3) If $e_{x,i}$ is $N$ and $\Delta e_{x,i}$ is $P$, then $r_i$ is $s_2$.
(R4) If $e_{x,i}$ is $N$ and $\Delta e_{x,i}$ is $N$, then $r_i$ is $s_3$.

The membership functions for the fuzzy sets $P$ and $N$ of $e_{x,i}$ and $\Delta e_{x,i}$ are configured in Figure 4.

We realize the "and" operator with the $t$-norm "minimum." That is, the matching degrees $\mu_1$, $\mu_2$, $\mu_3$, and $\mu_4$ of the premise parts of rules (R1) to (R4), respectively, are

$$\mu_1 = \min(\mu_P(e_{x,i}), \mu_P(\Delta e_{x,i})), \quad \mu_2 = \min(\mu_P(e_{x,i}), \mu_N(\Delta e_{x,i})),$$
$$\mu_3 = \min(\mu_N(e_{x,i}), \mu_P(\Delta e_{x,i})), \quad \mu_4 = \min(\mu_N(e_{x,i}), \mu_N(\Delta e_{x,i})). \tag{10}$$

Obtaining the matching degrees of the four fuzzy rules, we perform the defuzzification by equivalently applying the Takagi-Sugeno inference method. The inference result, the adequate ratio of the data amount to transmit, is

$$r_i(k+1) = \frac{\mu_1 s_1 + (\mu_2 + \mu_3) s_2 + \mu_4 s_3}{\mu_1 + \mu_2 + \mu_3 + \mu_4}, \tag{11}$$

by configuring the conclusion part membership functions as dynamic singletons as follows:

$$s_1 = r_i(k) - \delta, \quad s_2 = r_i(k), \quad s_3 = r_i(k) + \delta, \tag{12}$$

where $\delta$ is the ratio tuning amount for each round. Finally, the data amount assigned to node $i$ is determined as

$$d_i(k+1) = r_i(k+1) \, D(k+1), \tag{13}$$

where $D(k+1)$ is the total data amount to be transmitted from this cloudlet at time $k+1$. In the above fuzzy inference algorithm, each member node tracks the dynamic control goal $x_f(k)$, and the proportion of energy to be consumed in the remaining energy approaches the proportion of the whole federation. Therefore, intrafederation energy sharing is achieved by offloading transmission jobs using the proposed algorithm. When each node is itself a lower tier federation, the offloading is hierarchically extended through the MMCN networking, where the data amounts $d_i(k)$ are determined and assigned by the CCS and the energy status updates $e_i(k)$ are received and aggregated by the CIS. This shows the scalability of the HiBA architecture.
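The following sketch implements one inference round of (10)-(13) for a single node. The membership function shapes and the singleton offset are assumptions standing in for the configuration of Figure 4.

```python
# Fuzzy energy-aware balancing: one inference round for one node (sketch).
# mu_P/mu_N stand in for Figure 4's membership functions (assumed shapes).
def mu_P(v, width=0.5):
    return min(1.0, max(0.0, v / width))    # degree of "positive"

def mu_N(v, width=0.5):
    return min(1.0, max(0.0, -v / width))   # degree of "negative"

def next_ratio(r, e_x, de_x, delta=0.01):
    """Takagi-Sugeno step (10)-(12): singletons move r by +/- delta."""
    m = [min(mu_P(e_x), mu_P(de_x)),   # R1: overloaded and worsening -> decrease
         min(mu_P(e_x), mu_N(de_x)),   # R2: overloaded but improving -> hold
         min(mu_N(e_x), mu_P(de_x)),   # R3: underloaded but rising   -> hold
         min(mu_N(e_x), mu_N(de_x))]   # R4: underloaded and falling  -> increase
    s = [r - delta, r, r, r + delta]   # dynamic singletons per (12)
    denom = sum(m)
    return r if denom == 0 else sum(mi * si for mi, si in zip(m, s)) / denom

# A node currently more loaded than the federation and getting worse:
r = next_ratio(r=0.25, e_x=0.2, de_x=0.1)  # ratio decreases toward x_f
d_i = r * 2.4e9                            # data amount per (13), D(k) = 2.4 GB
print(r, d_i)
```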

4.2. Load Balancing

Load balancing is required in a bigger federation organized at a higher tier of HiBA because a higher federation always receives a larger number of request arrivals. As shown in Figure 5, the framework of the proposed load-balanced cloud service interface consists of three components, which realize the CES, CCS, and CIS, respectively:

(1) Cloud Service Interface Node (CSIN): a virtual machine that provides the cloud service interface, shown in Figure 5. A CSIN receives members' requests and obtains results by interacting with the requested cloud service module. In our design, each cloud service interface in a CSIN is implemented as a RESTful API for universal communication with different types of mobile devices. A CSIN is the realization of the CES in the MMCN framework.
(2) CSI manager: manages the usage status of the CSINs and matches mobile users to CSINs. The efficiency of the load balance depends on the matching method of the CSI manager. The CSI manager is the realization of the CCS in the MMCN framework.
(3) CSI allocation table: maintains the serving status between mobile users and CSINs and is used by the CSI manager to assist the matching decision for newly arrived mobile users. This table is stored in the cloud database. The CSI allocation table is the realization of the CIS in the MMCN framework.

The processing flow for a mobile user using a cloud service is designed to achieve load balance. Committing a request involves three phases: (1) service registration (Steps (1)-(2)), (2) service execution (Steps (3)-(7)), and (3) service deregistration (Steps (8)-(9)). The detailed steps, also illustrated in Figure 5, are as follows:

(1) A mobile user $u$ sends a request to the CSI manager for allocating a CSI address.
(2) The CSI manager searches the CSI allocation table for the CSIN with the least number of serving users, say $S_j$, and then registers $(u, S_j)$ into the CSI allocation table.
(3) The CSI manager informs $u$ that $S_j$ can serve his/her cloud service requests in the following period.
(4) $u$ configures the cloud service interface by replacing the IP attribute in the service template with the IP address of $S_j$.
(5) $u$ connects to $S_j$ through the configured URL of the cloud service interface and sends a request.
(6) $S_j$ delivers the request to the associated cloud service, which may access the cloud databases if required. After processing, the results are sent back to $S_j$.
(7) $S_j$ sends the results to $u$. (If more requests need to be processed, Steps (3)-(7) are repeated.)
(8) $u$ deregisters with the CSI manager.
(9) The CSI manager removes the registration record of $u$ from the CSI allocation table.

Notice that the matching rule between a new mobile user and the CSINs in the current design is least-user-first, as shown in Step (2); a minimal sketch follows this list. The matching principle can be modified according to different demands. In the experiments of Section 5.3 we will show that the current design already obtains splendid performance: although five machines are used in our CSI (versus the single-machine CSI), the missing rate improves by roughly a factor of one hundred, indicating the superior performance of the proposed load-balanced CSI mechanism under the HiBA architecture. More study on customizing matching methods is one of our future research directions.
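The promised sketch of Steps (2), (8), and (9): an in-memory dictionary stands in for the CSI allocation table, which is actually stored in the cloud database; names are illustrative.

```python
# Least-user-first matching (Step (2)) by the CSI manager (sketch).
class CSIManager:
    def __init__(self, csins):
        self.allocation = {s: set() for s in csins}   # CSIN -> serving users

    def register(self, user):
        """Match a new user to the CSIN serving the fewest users."""
        csin = min(self.allocation, key=lambda s: len(self.allocation[s]))
        self.allocation[csin].add(user)               # Step (2): register (u, S_j)
        return csin                                   # Step (3): inform the user

    def deregister(self, user, csin):
        self.allocation[csin].discard(user)           # Steps (8)-(9)

mgr = CSIManager(["csin-1", "csin-2", "csin-3", "csin-4", "csin-5"])
s = mgr.register("user-42")   # the user then talks to s via its RESTful URL
mgr.deregister("user-42", s)
```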

5. Real-World Experiments

5.1. HiBA Baseline Performance

We conduct an experiment demonstrating the performance differences in availability and latency when the distributions of virtualized resource instances (VRIs) differ. We leave the development of enhancing algorithms for specific purposes in the CCS, which will seek performance in terms of various metrics, as future work. A total of 50,000 VRIs are distributed in a 3-tier HiBA federation comprising Android smart phones, tablets, and VMs in desktop PCs. Figure 6 shows the HiBA topology, and the specifications of the machines are given in Table 1. Each device is labeled $(n, i)$ if it is the $i$th node at tier $n$. Device (1, 3) connects to (2, 0) through a 3G access point with VPN tunneling. Device (0, 11) connects to (1, 0) over USB and shares the Internet connection of (1, 0). The machines have different computing abilities, but the request and task queues are all of size 20 to preserve these differences. Requests are randomly generated in each device. Mean request arrival rates ($\lambda$) are 3, 1.5, and 1 requests per second, which equivalently means that mean request intervals are 333 ms, 666 ms, and 1 s, respectively. There are four cases of resource distribution. In the first case, the tier-2 device possesses all 50,000 VRIs while 16,000 VRIs are duplicated to each tier-1 federation. In the remaining three cases, we duplicate 28,000, 32,000, and 36,000 VRIs, respectively, to each tier-1 federation. The experiment is conducted in a heterogeneous networking environment connecting the machines by the different communication technologies shown in Table 1.

The results of the experiment are shown in Figures 7 and 8. Both figures reveal that distributing resources to lower tiers provides better performance. Figures 8(a) and 8(b) show the latency differences caused by the resource distributions in which 16,000, 32,000, and 40,000 VRIs are duplicated in each cloudling tier. The results before all brokers become fully loaded are also given in the respective zoom-in charts. When all devices continuously issue requests at a high frequency (3 requests per second), the queue delays are obviously high, and a few requests are dropped. Requests from tier-1 devices have a lower loss rate since the requested resources are obtained in a shorter time. For each physical machine, the CPU utilization of the MMCN brokering itself is smaller than 2%, depending on the number of cores. The mean brokering computation and database querying time is estimated to be about 500 ms, while the WLAN delay time is smaller than 5 ms. In the heterogeneous network case, the latency difference becomes larger if the upper tiers are physically at a long distance from the cloudlings and cloudlets. If the request interval is longer than the processing time, the latency tends to be smaller. When more VRIs are duplicated in lower tier devices, the latencies are also smaller.

In this paper, we leave algorithms such as request partition, network embedding, and federation for specific performance metrics as future work. Instead, we implement a real-world HiBA to show that, with the feedback framework, it is a new design paradigm and development platform for these algorithms.

5.2. Energy-Aware Balancing

In the energy-aware balancing experiment, we use four mobile devices of different types from different manufacturers. The cloudlet is self-organized according to the broker election protocol in [8]; the elected broker is the ASUS Fonepad. The other hierarchical federations are as in Figure 6. Each device, including the broker itself, updates its energy status in each round to the CIS of the broker (the ASUS Fonepad). Each mobile device in the cloudlet is recharged to 100% of its battery capacity. The data and the total data amount to be transmitted in each round are randomly generated, with a mean of 2.4 GB, prior to the experiment. For each round $k$, the total data amount $D(k)$ is the same in both the balancing and nonbalancing cases. In the nonbalancing case, the data amounts are evenly assigned to the members; in the balancing case, they are determined by the fuzzy controller of Section 4.1. The data amount is assigned in units of 100 MB. When any device's remaining energy drops to 40% or lower, the experiment is terminated. The initial singletons $s_1$, $s_2$, and $s_3$ of (12) are configured prior to the experiment. The result of energy-aware balancing is shown in Figure 9, and the respective control systems' behaviors are shown in Figure 10. The fuzzy controllers of the mobile devices all track the federation's data-amount-to-energy proportion ($x_f$). We see that, no matter how the performance of each device varies, a device with better transmission capability shares its energy with the others. Tablet PCs usually have higher capacity batteries: without energy sharing, the remaining energy of a tablet drops more slowly than that of a mobile phone; with energy sharing, the tablets undertake more transmission jobs and exhibit a higher energy drop rate, while the mobile phones exhibit a lower one. As a result, the whole cloudlet federation has a longer lifetime. The experiment can also be applied to other energy consuming tasks in addition to data transmission.

5.3. Load Balancing

The load-balanced cloud service interface (CSI), in the form of RESTful APIs, is implemented using Jersey and deployed on the Microsoft Windows Azure public cloud platform. We conduct an experiment comparing the proposed CSI to a single-machine CSI. In our experiment, five CSI machines are used to share the requests from users, and the data producing rate for each user is 1 request per second. The performance metric is the missing rate, defined as $(N_t - N_s)/N_t$, where $N_t$ is the total number of requests and $N_s$ is the number of successful requests. Table 2 shows the experimental results for user populations from 100 to 300. When the number of users is low (100 users), both CSI designs successfully process all data requests. As the number of users increases (e.g., 300 users), our proposed CSI still processes almost all requests, with only a 0.27% missing rate, whereas over 27% of requests fail to deliver data to the cloud database with the single-machine CSI.

6. Conclusion

We have proposed and implemented HiBA, a mobile device-centric cloud computing architecture based on a feedback control framework. The proposed hierarchical brokering architecture is self-organized, featuring scalability and hierarchical autonomy, and makes it easy to embed management and control algorithms to develop a federation with better performance in terms of availability and latency. Mobile devices and small servers federate into cloudlets and cloudlings of hierarchical tiers such that mobile devices not only request services but also provide the resources required by diverse services. The implemented HiBA architecture with the feedback control framework demonstrates improved performance when resources are adequately distributed to lower tier federations rather than centralized in a remote cloud server. Through the two case studies of cloudlet energy-aware balancing and bigger data center load balancing, we have shown that HiBA is a development platform for mobile cloud computing and provides a feasible solution to UCN services. Future work includes embedding optimization algorithms and developing big data analytics applications with crowd sensing.

Competing Interests

The authors declare that they have no competing interests.

Acknowledgments

Thanks are due to the Ministry of Science and Technology (MOST, formerly the National Science Council) in Taiwan for sponsoring this research under the continuously funded Projects NSC101-2221-E-327-020-, NSC101-2221-E-327-022-, NSC102-2218-E-327-002-, MOST103-2221-E-327-048-, MOST105-2221-E-327-027-, and MOST105-2221-E-006-141-.