Abstract

Service composition optimization is one of the core issues in cloud manufacturing research. However, current studies of service composition in cloud manufacturing assume that tasks have already been decomposed into subtasks that can be directly mapped to existing services. Due to the complexity, diversity, and multilevel nature of services in cloud manufacturing, services have different granularities, so the matching between tasks and services does not always occur at the lowest level. To solve the problem of discontinuity between task decomposition and service composition, this paper considers the characteristics of the existing services in the cloud pool and proposes a task decomposition strategy based on task/service matching, built on a refined description model of tasks and services. Then, for the decomposed subtask set, the E-CARGO model is used to model the optimal composition process of services, and CPLEX is used to solve the model. Practical cases show that the proposed task decomposition strategy can solve the problem of discontinuity between task decomposition and service composition without heavy reliance on expert systems. In addition, the proposed service composition model is more flexible and can easily model more variable factors, and CPLEX can solve the model quickly and stably.

1. Introduction

Recently, with the promotion of emerging technologies such as cloud computing, the Internet of Things (IoT), cyber-physical systems (CPS), big data analysis, and artificial intelligence, a new industrial manufacturing mode with the core characteristics of globalization, personalization, digitization, cloud computing, collaboration, and integration, namely cloud manufacturing, has been proposed [1–7]. As an emerging manufacturing model, cloud manufacturing can effectively solve the problems of shortage and idleness of manufacturing resources and of deficiency and excess of manufacturing capacity in China's manufacturing industry, and it realizes a manufacturing-oriented service model characterized by sharing, collaboration, and on-demand use, which can provide new impetus for the transformation and upgrading of the manufacturing industry [8, 9].

To date, cloud manufacturing has attracted a large number of scholars around the world, and many problems that remain to be solved have been proposed and discussed, such as the architecture of cloud manufacturing service platforms, business models, the description of manufacturing resources, and the optimal allocation of resources [9, 10]. Among these, the optimal allocation of manufacturing resources is the core function of cloud manufacturing, mainly comprising two core processes: task decomposition and service composition.

Task decomposition is the basis and premise of service composition. Its goal is to obtain a highly cohesive task sequence to ensure that different service providers can cooperate to fulfill customers' requirements. However, in current research, task decomposition and service composition are usually carried out as two independent and unrelated procedures. On the one hand, task decomposition relies heavily on industry expert systems and lacks consideration of the status of existing services in the cloud pool, which makes it difficult to keep up with changing market demand; on the other hand, existing service composition research lacks modeling of the collaboration between services, which affects the overall execution efficiency of manufacturing tasks. Therefore, how to integrate task decomposition and service composition is the core issue of this paper.

As an intermediate procedure between task decomposition and service composition, service matching is mainly used to connect manufacturing tasks and services to maximize the satisfaction of both supply and demand. How to effectively use service matching to solve the problem of discontinuity between task decomposition and service composition is one core problem to be solved in this paper. In addition, the E-CARGO model, which was proposed by Professor H. B. Zhu in 2006, is used to describe a role-based collaboration system, and how to use it to model the collaboration between services to improve the practicability of the service composition model is another problem to be solved in this paper.

The main contributions of this paper are as follows:
(1) According to the characteristics of the candidate service sets obtained after task/service matching, specific computable formulas are given for the internal competition within candidate service sets and for the collaboration and dependence between candidate service sets.
(2) Considering the service state, a new task decomposition algorithm based on task/service matching is proposed, which combines the two processes of task decomposition and task/service matching and reduces the dependence on expert systems.
(3) A new cloud manufacturing service composition model based on role collaboration is constructed, where a mapping is established between the service composition problem and the E-CARGO model. This not only avoids the tendency of heuristic algorithms to fall into local optima but also facilitates the introduction of multiple variable factors to extend and optimize the model.

The remainder of this paper is organized as follows. Related work is described in Section 2. Section 3 describes the core problems related to the optimal allocation of manufacturing resources. Section 4 gives the description model of tasks and services, introduces the task/service matching based task decomposition algorithm, and describes the subtask reorganization algorithm combined with service characteristics. Section 5 formalizes the service composition approach by utilizing the E-CARGO model and the corresponding calculation methods. Section 6 analyzes and verifies the proposed methods through an application case and performance analysis. Finally, the conclusions appear in Section 7.

2. Related Work

As the core part of implementing cloud manufacturing, manufacturing resource optimized allocation aims to provide a set of capabilities/services for satisfying personalized manufacturing demands through a process of resource composition and optimal selection. Efficient shared manufacturing resource allocation can not only achieve rapid response to diverse manufacturing demands but also facilitate full-scale sharing of enterprises' resources.

Task decomposition is one of the most important preprocessing stages of manufacturing resource optimized allocation, yet it is challenging, not only because tasks are characterized by cross-industry heterogeneity but also because the decomposition process needs to consider the state of services (such as service granularity and relevance).

In the existing research on task decomposition, some methods use a design structure matrix (DSM) to express the interaction information and correlation degree between tasks [11–13]. Kherbachi et al. used DSM to cluster the tasks in product development and matched the corresponding development tasks to appropriate research and development groups [14]. Liu and Zhou proposed a method of task decomposition and reorganization based on DSM combined with adjustable task granularity [15]. However, DSM has shortcomings in both the quantitative analysis of uncertain information and the decomposition of complex tasks. In [16], Shriyam et al. proposed an approach based on a dynamic grid for decomposing exploration tasks among multiple Unmanned Surface Vehicles (USVs) in port regions. In other studies, tasks are decomposed into subtasks with appropriate granularity according to the task hierarchy and task correlation. In [17], Zhang et al. constructed a global manufacturing business process network (GMBPN) according to the input and output relationships of manufacturing business activities and, based on the GMBPN, further presented a two-phase decomposition algorithm. Hu et al. divided complex manufacturing tasks into multiple stages according to the attributes and characteristics of the production process and proposed a novel hybrid method combining depth-first search, fast modularity, and artificial bee colony to optimize multistage production processes [18]. To solve complex parts machining problems in CMfg, Guo et al. presented a machining task decomposition strategy that uses features of the complex part as the task granularity [19]. Liu et al. proposed an ordered task decomposition method (a hierarchical-task-network-based decomposition method) considering task granularity, cohesion, and correlation [20].

The above literature studies task decomposition strategies from different levels and perspectives, but these works only use the characteristics of the task itself or the correlation between tasks to decompose the task and take no account of the service state, so the task decomposition process is divorced from the matching and assignment of tasks and services. To address this, Yi et al. [21] decomposed the manufacturing task into atomic tasks according to predefined decomposition rules and then reorganized these atomic tasks using clustering algorithms that consider task correlation, the matching degree of tasks and services, and competition between services. Although this algorithm considers the state of the services, it inverts the dependency between tasks and services. On the one hand, the decomposition into atomic tasks relies heavily on expert systems, and it is difficult to realize a comprehensive expert system for massive heterogeneous tasks across industries and categories. On the other hand, even if atomic subtasks can be successfully decomposed, there is a high probability that some atomic subtasks cannot be matched to an appropriate candidate service set.

Service composition is another core function in the process of resource optimized allocation. Its main purpose is to select a group of optimal services from the candidate services of each subtask to complete the demander's manufacturing task. In essence, this is the process of combining multiple services (atomic or composite) into value-added services to complete one or a group of tasks. Because service composition is a typical multiobjective, multiconstraint NP-hard problem, a large number of metaheuristic algorithms, such as genetic algorithms, particle swarm optimization, and ant colony algorithms, have been proposed to find optimal or near-optimal composition schemes in a reasonable time [22–31]. The typical process of these algorithms is to (1) propose a heuristic algorithm for a fixed model and (2) repeatedly test and adjust the heuristic algorithm to obtain the required performance. If some aspect of the services (such as service availability or service quality) changes, this process usually must be repeated. It can be seen that these algorithms have a complex design process and are usually tailored to specific problems. Therefore, they are not adaptive to dynamic environments: when the environment changes, they may need to be redesigned.

In addition to devising such algorithms, another important problem is how to establish an appropriate service composition model. As an important index for determining the quality of service composition and an important factor to be considered during composition, Quality of Service (QoS) is widely used in service composition modeling. For example, Que et al. [30] proposed a user-model (M2U) method to solve service composition and optimal selection and established the corresponding mathematical evaluation model by comprehensively considering four QoS evaluation indexes (time, cost, reliability, and capability). Li et al. proposed an extended Gale-Shapley (GS) algorithm for service composition that effectively generates multiple service composition solutions, where requirements with different constraints are considered [31]. Considering the one-to-one mapping between basic services and subtasks, Liu and Zhang [32] freely combined multiple basic services with equivalent functions into a cooperative service group (SESG) to complete each subtask together; they also introduced the optimized structure of the SESG into the QoS evaluation model and gave the corresponding QoS evaluation formula. In [33], Jin et al. proposed a correlation-based service description model to describe the QoS dependence of a single service on other related services and then introduced a service correlation mapping model to automatically obtain the QoS correlation values between services. Laili et al. [34] studied the multistage integrated scheduling problem of hybrid tasks in a cloud manufacturing environment to maximize production efficiency while balancing different production task orders; their experimental results show that the method reduces production cost and shortens production time. In [23], Yuan et al. proposed six basic QoS indexes, including time, composability, quality, availability, reliability, and cost, and determined the weight of each index in the QoS model using an improved fuzzy comprehensive evaluation method.

Because services and tasks in cloud manufacturing have different granularities, service composition is essentially a dynamic matching process between multigranularity tasks and services. However, most current research on service composition focuses on QoS modeling and rarely considers how to incorporate further practical constraints, such as cooperation and competition between related services, in a way that allows the existing models to be extended easily.

3. Problem Description

The cloud manufacturing software service platform needs to solve the problems of the low sharing rate of manufacturing resources, poor collaboration among enterprises, and the low customization level of manufacturing solutions in the manufacturing process. Figure 1 shows its operating principle; the platform mainly involves three types of user roles: resource providers, resource demanders, and platform operators. Resource providers describe and publish the manufacturing resources (which will be encapsulated as services) in a unified model; resource demanders submit their manufacturing requirements (which can also be called tasks) and access various manufacturing resources on demand with the support of the platform; platform operators mainly audit and manage the resources and requirements on the platform, release or update the various templates of resources and requirements in time, and monitor the transactions between suppliers and demanders.

Optimal allocation of manufacturing resources is an important procedure in cloud manufacturing, and its basic workflow is shown in Figure 2:
(1) Decompose the total task submitted by resource demanders into subtasks for collaborative completion.
(2) According to the task-service matching method, the cloud manufacturing software service platform searches for and matches the candidate service sets that can complete each subtask of the initial total task.
(3) Considering QoS factors such as time, cost, quality, and reliability, the platform selects the best services from the candidate service set of each subtask to form an optimal execution plan for the initial total manufacturing task.
(4) Execute and supervise the completion of tasks according to the optimal execution plan.

Task decomposition is the most basic initialization step in the optimal allocation of manufacturing resources. Its purpose is to decompose the overall manufacturing task into subtasks that are executable and have low interdependence, so that the cloud manufacturing software service platform can match appropriate services to fulfill the user's demand. Clearly, the task decomposition results are closely related to the quality of the subsequent resource optimization procedure.

In the process of task decomposition, the scale of subtasks is characterized by task granularity, which directly affects the quality and progress of task collaboration. Specifically, the larger the granularity, the higher the task integrity, but implementation becomes complex, which is not conducive to multienterprise cooperation. Conversely, the finer the granularity, the more interaction between tasks, making coordination difficult and making logistics and information-interaction costs prominent, which affects the quality, progress, and cost of task completion. Therefore, how to combine the characteristics of the services in the cloud pool and decompose tasks at an appropriate granularity is of great significance in resource optimization.

After a manufacturing service set is matched to each manufacturing subtask under the functional constraints, suitable services need to be selected from each candidate set and assembled into composite services in a certain order to collaboratively fulfill the user's manufacturing requirements. How to build a more flexible composition model that can easily incorporate more variable factors to adapt to the open and dynamic manufacturing environment is one of the urgent problems to be solved.

4. Task Decomposition Strategy

In this paper, task decomposition is divided into two stages: preliminary decomposition and reorganization. In the preliminary decomposition stage, the total task is decomposed into executable atomic subtasks based on task/service matching. In the reorganization stage, subtasks with small granularity are merged into subtasks with appropriate granularity by considering the internal competition within candidate service sets and the cooperation and dependence between candidate service sets.

4.1. Description Model of Tasks and Services

Cloud manufacturing users come from different enterprise and engineering application fields, and their needs are diversified and personalized. In addition, manufacturing resources are widely distributed in various forms and types. Manufacturing enterprises' descriptions of requirements and resources are often unclear, incomplete, and inconsistent, which makes it difficult to realize dynamic cooperation between users and resources in the cloud manufacturing environment. Therefore, a unified formal description is necessary. Here, requirements are modeled as manufacturing tasks, and combinations of resources are modeled as manufacturing services that complete the specified tasks. Based on the formal description of existing manufacturing services and manufacturing tasks (CloudService = {ID, TypeInfo, BaseInfo, ResourceInfo, FuncInfo, AssessInfo, StatuInfo}) [35], we further extend the description of the function information FuncInfo = {FunProfile, InputParams, OutputParams}, where we have the following:
(1) FunProfile: function summary, which briefly describes the functions provided by the service.
(2) InputParams: the input information of the function, indicating the necessary information or materials the demand side must provide during the execution of the service. For example, for a service manufacturing a special wrench, the demand side may need to provide the corresponding design drawings.
(3) OutputParams: the output content of the function, which indicates the service results to be delivered to the demander after the service is executed. For a software development service, the output content may be executable source code or a one-year technical support service.
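To make the extended description model concrete, the following minimal Java sketch represents the service description as plain data classes. The class and field names mirror the tuple above, but the types and structure are illustrative assumptions, not the platform's actual schema.

import java.util.List;

// Illustrative data model of the extended cloud service description (Section 4.1).
public class CloudService {
    String id;            // ID: unique identifier of the service
    String typeInfo;      // TypeInfo: service category
    String baseInfo;      // BaseInfo: provider, location, and other basic data
    String resourceInfo;  // ResourceInfo: the underlying manufacturing resources
    FuncInfo funcInfo;    // FuncInfo: the extended function description
    String assessInfo;    // AssessInfo: historical evaluation data
    String statuInfo;     // StatuInfo: current availability status

    public static class FuncInfo {
        String funProfile;          // FunProfile: brief summary of the provided function
        List<String> inputParams;   // InputParams: information/materials the demander must supply
        List<String> outputParams;  // OutputParams: results delivered after execution
    }
}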

4.2. Preliminary Decomposition Method

To avoid relying too much on industry knowledge and to prevent decomposed subtasks from failing to match any appropriate services, we integrate task decomposition and task-service matching as a whole, using the existing services in the cloud pool to adaptively complete the preliminary task decomposition. The basic flow of the method is shown in Algorithm 1.

Input: task waiting to be decomposed, T
Output: subtasks and the corresponding candidate service sets
(1)initialize k to 0, i to 0, n to 0
(2)label T as T_{0,0}
(3)match the service set S_{k,i} in the cloud pool that meets the output requirements of T_{k,i}
(4)if S_{k,i} is not empty
(5) save T_{k,i} and S_{k,i}
(6) calculate the count n of the intersection of the inputs of S_{k,i}, and label the intersection InSect
(7) set the input of T_{k,i} to InSect
(8)if n is not equal to 0
(9)  set k + 1 to k
(10)  for epoch = 1, 2, …, n do
(11)   set i + 1 to i
(12)   create a new subtask T_{k,i} whose output of FuncInfo is the i-th value of InSect
(13)   goto 3
(14)  end for
(15)  set i to 0
(16)else:
(17)  goto 11
(18)end if-else
(19)else:
(20)if k equal to 0 and i equal to 0
(21)  end algorithm
(22)else:
(23)  goto 11
(24)end if-else
(25)end if-else

Step 1. Initialize the variables T_{k,i} and S_{k,i}, where k indexes the k-th subtask waiting to be decomposed and i indexes the i-th subtask of the k-th subtask. In particular, when k = 0, T_{0,0} means the original task T, and when i = 0, T_{k,0} means the k-th original subtask, which has not yet been decomposed. S_{k,i} is the corresponding candidate service set of T_{k,i}.

Step 2. Match the service set in the cloud pool that meets the output requirements of T_{k,i}. If such a service set can be matched, proceed to Step 3; otherwise, proceed to Step 5.
The matching method is as follows:
(1) Use the TF-IDF algorithm to extract the keywords of task T_{k,i} and of each service j (0 ≤ j ≤ N), where N represents the number of services in the cloud pool that have not yet participated in the matching.
(2) Select several keywords from task T_{k,i} and from service j, respectively, and merge them into a set D.
(3) Calculate the word frequencies of task T_{k,i} and service j relative to all words in set D.
(4) Calculate and save the cosine similarity sim_j between task T_{k,i} and service j.
(5) Sort all sim_j and calculate the average of the top M values. If the average is greater than the threshold C, the matching is regarded as successful, and the top M services form the corresponding candidate service set; otherwise, the matching is regarded as failed. Here C and M are constants.
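As an illustration of steps (2)-(5), the following Java sketch computes the cosine similarity between a task and a service over the merged keyword set D and applies the top-M/threshold-C acceptance test. It assumes the keywords have already been extracted with TF-IDF; the class and method names are our own.

import java.util.*;

public class TaskServiceMatcher {
    // Steps (2)-(4): cosine similarity between a task and a service,
    // both represented by their extracted keyword lists.
    static double cosineSimilarity(List<String> taskKw, List<String> serviceKw) {
        Set<String> d = new LinkedHashSet<>();   // merged keyword set D
        d.addAll(taskKw);
        d.addAll(serviceKw);
        double dot = 0, nt = 0, ns = 0;
        for (String w : d) {                     // word frequencies relative to D
            double ft = Collections.frequency(taskKw, w);
            double fs = Collections.frequency(serviceKw, w);
            dot += ft * fs;
            nt += ft * ft;
            ns += fs * fs;
        }
        return (nt == 0 || ns == 0) ? 0 : dot / (Math.sqrt(nt) * Math.sqrt(ns));
    }

    // Step (5): matching succeeds if the mean of the top M similarities exceeds C.
    static boolean matches(double[] sims, int m, double c) {
        double[] s = sims.clone();
        Arrays.sort(s);                          // ascending order
        int top = Math.min(m, s.length);
        double sum = 0;
        for (int k = 0; k < top; k++) sum += s[s.length - 1 - k];
        return top > 0 && sum / top > c;
    }
}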

Step 3. Save task T_{k,i} and the corresponding service set S_{k,i}, calculate the intersection of the inputs of the services in S_{k,i}, labeled InSect, and then set InSect to be the input of T_{k,i}.

Step 4. Create a new subtask whose output is a value of InSect, and then repeat Steps 2 to 4 to obtain the subtasks T_{k+1,i} and the corresponding service candidate sets S_{k+1,i}, where 1 ≤ i ≤ n and n is the size of InSect.

Step 5. Judge whether T_{k,i} is the original task T; if it is, the algorithm terminates; otherwise, proceed to Step 4.

4.3. Subtask Reorganization Algorithm

In the preliminary task decomposition stage, only the functional matching of services is considered, and the original task is decomposed into executable subtasks of smaller granularity, without considering the competition within each candidate set, the dependence between candidate sets, or the difficulty of collaboration, which is detrimental to the collaborative work between the final services.

Definition 1. Internal competitiveness N of candidate service sets. It refers to the relative number of services available in the candidate service set corresponding to each subtask after preliminary decomposition. To preserve the competitiveness between services, formula (1) can be used in the actual calculation process:
N = ∑_{i=1}^{S} d_i, (1)
where S is the number of services in the candidate service set (that is, the services whose matching degree ranks in the top S in the preliminary task decomposition stage) and d_i indicates the matching degree between the i-th service in the candidate set and the corresponding subtask. The higher the value of N, the more competitive the candidate service set.
It should be noted that in the cold-start stage of the system, the number of services in the cloud pool is small, so the value of S can be set smaller. As the number of services increases, the value of S can be gradually increased, but the increase should be weighed against the efficiency of the algorithm.
The competitiveness of a candidate service set is to ensure that when a resource is selected, we can quickly find a substitute when it is unable to provide services due to unexpected circumstances. From a long-term perspective, reserving competition space will make the pricing given by manufacturing resource owners more reasonable and urge them to actively improve service quality so as to promote the healthy development of the cloud manufacturing platform [21].

Definition 2. The degree of dependency R between services, which can be expressed by the correlation between services. Considering that the correlation is mainly determined by the logistics correlation and the information-exchange correlation between them, and to reduce the complexity of calculation, the number and type of inputs involved in task dependencies can be used as parameters to calculate the dependence between service candidate sets, as shown in
R = ∑_{i=1}^{M} ρ_i × n_i, (2)
where M is the number of input categories, n_i is the number of inputs of category i, and ρ_i (0 ≤ ρ_i ≤ 1) is the correlation coefficient of category i. The actual value of ρ_i is determined by the expert evaluation method and will be revised according to operating results during the daily operation of the platform. The smaller the granularity of task decomposition, the greater the dependency.

Definition 3. Coordination difficulty T between services. It refers to the difference between the longest and shortest service times (i.e., the execution waiting time of dependent tasks), which can be calculated by
T_i = ∑_{j=1}^{N} (t_j^max − t_j^min), (3)
where T_i is the coordination difficulty of the i-th task, N is the number of layers of all subtasks of the i-th task, and t_j^max and t_j^min represent the maximum and minimum execution times of the subtasks in layer j, respectively. Obviously, the larger the granularity of task decomposition, the more difficult the coordination.

Definition 4. The granularity G of the candidate service set, which indicates how suitable the granularity of the candidate service set is for completing the specified task. According to a comprehensive analysis of formulas (2) and (3), the higher the interdependence between candidate service sets, the more unfavorable it is to complete the tasks collaboratively; that is, as the R value increases, the G value should be appropriately increased (i.e., subtasks should be merged). However, the larger the decomposition granularity, the longer other parallel tasks must wait, which is unfavorable for the system to complete the task; that is, as the T value increases, the G value should be appropriately reduced (i.e., the task should be further decomposed). G is therefore expressed as a function of these quantities in formula (4). It should be noted that when calculating G using formula (4), the values of N, R, and T need to be normalized.

Definition 5. The state tree of the candidate service set, which is used to simplify the description of the task reorganization process. Each node in the tree is a meta-task obtained after the preliminary decomposition of the task. The value of a node represents the internal competitiveness of the service candidate set corresponding to that node, and the weight of an edge represents the granularity of the candidate set relative to the parent node.
To increase the internal competitiveness of the candidate service sets, reduce the degree of dependency between candidate service sets, and reduce the waiting time of parallel services in candidate service sets, we design the pruning algorithm shown below to realize task reorganization (Algorithm 2).

Input: atomic subtasks and the corresponding candidate service sets
Output: subtasks with suitable granularity and the corresponding candidate service sets
(1)initialize k to 0, m to 0, nodeVal to −1
(2)construct the state tree of candidate resource set
(3)set m to the count of the original subtasks
(4)for epoch = 1, 2, …, m do
(5) set k to the count of the subtasks of the m-th original subtask
(6)for epoch = 1, 2, …, k do
(7)  if the k-th subtask is not a leaf node
(8)   get its node value nodeVal
(9)   if nodeVal is greater than 0
(10)    crop and recalculate the state tree
(11)   end if
(12)  end if
(13)end for
(14)end for

Step 6. Calculate the internal competition of each candidate service set, the dependencies between each candidate service set and others, and the coordination difficulty of each candidate service set, so as to construct the state tree of the candidate resource set using formula (4).

Step 7. Traverse all nodes in the m-th layer of the candidate service set state tree to determine whether the child nodes under each node need to be merged:
(1) If the k-th node in the m-th layer is a leaf node, set k = k + 1 and repeat steps (1)–(3); otherwise, go to step (2).
(2) Get the value nodeVal of the k-th node in the m-th layer; if the value is less than zero, set k = k + 1 and return to step (1); otherwise, go to step (3).
(3) Crop all child nodes under the k-th node, recalculate the node values in the new state tree, set k = k + 1, and repeat steps (1)–(3).
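The following minimal Java sketch illustrates the cropping rule of Step 7 on a simple tree structure. The Node type is an illustrative assumption; recalculating node values after cropping (formula (4)) is only indicated by a comment.

import java.util.ArrayList;
import java.util.List;

public class StateTreePruner {
    public static class Node {
        double value;                       // merge indicator derived from formula (4)
        List<Node> children = new ArrayList<>();
    }

    // Crop the subtree of any non-leaf node whose value is greater than zero,
    // i.e., merge its subtasks into the node itself.
    public static void prune(Node node) {
        if (node == null || node.children.isEmpty()) return;   // leaf: nothing to merge
        if (node.value > 0) {
            node.children.clear();          // merge: the parent subsumes its subtasks
            // in the full algorithm, the state tree values would be recalculated here
            return;
        }
        for (Node child : node.children) prune(child);
    }
}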

Step 8. Set m = m + 1 and go back to Step 7.

5. Service Composition Model

5.1. Problem Description

For the original manufacturing task submitted by the customer, the subtask list obtained after task decomposition is shown in Table 1, where the "number of acceptable services" represents the number of service providers that the demander can accept. In the specific implementation process, some subtasks may be complex, so multiple services are required to complete them.

Considering that there may be a large number of manufacturing tasks of the same category with the same or similar inputs and outputs in the cloud manufacturing software service platform, it is necessary to consider the composition quality of all tasks of the same category while building the optimal composition model of services. Let subtask t_j belong to category c. Then the service candidate set matching task t_j can also match all other tasks under category c. Afterward, according to the given QoS evaluation model, we can calculate the competency of each service in the candidate set for each task under category c, as shown in Table 2. The composition target is to find the service combination with the highest competency, meaning that the sum of the competencies over all subtasks of the manufacturing task T is the largest while all other tasks under each category also obtain the highest competency.

It should be noted that the competency values in Table 2 need to be calculated under a given QoS model according to the evaluation data of the specific evaluation system in the cloud manufacturing service platform. As each attribute of QoS has a different measurement method and dissimilar units, the aggregated QoS values for each attribute should be normalized before evaluating the global QoS of the cloud manufacturing service. Each attribute is either a positive or a negative factor (for a negative factor, the smaller the value of the index, the better for the service requesters, and vice versa). The normalized values can be obtained using the following equations:
q′ = (q − q_min)/(q_max − q_min), (5)
q′ = (q_max − q)/(q_max − q_min), (6)
where q is the aggregated value of an attribute and q_max and q_min are its maximum and minimum values over the candidate services. Formulas (5) and (6) are associated with the normalization of positive QoS indices (such as availability and reliability) and negative QoS attributes (such as time and cost), respectively. In this paper, for convenience of expression, the aggregated QoS values are generated randomly between 0 and 1.
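The normalization of formulas (5) and (6) can be sketched as the following small Java utility, where the flag positive marks benefit attributes; the method name and the handling of the degenerate case q_max = q_min are our own assumptions.

public class QosNormalizer {
    // positive = true for benefit attributes (e.g., availability, reliability),
    // false for cost attributes (e.g., time, cost).
    static double[] normalize(double[] q, boolean positive) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (double v : q) {
            min = Math.min(min, v);
            max = Math.max(max, v);
        }
        double[] out = new double[q.length];
        for (int i = 0; i < q.length; i++) {
            if (max == min) {
                out[i] = 1.0;               // all candidates equal: treat as best
            } else {
                out[i] = positive ? (q[i] - min) / (max - min)    // formula (5)
                                  : (max - q[i]) / (max - min);   // formula (6)
            }
        }
        return out;
    }
}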

Collaboration is a typical feature of service composition. As a typical model used to describe role-based collaborative systems, E-CARGO [36–41], proposed by Professor Zhu in 2006, is applied to service composition in cloud manufacturing in this paper to clearly describe the collaboration within a service composition. In E-CARGO, a role-based system is described as a nine-tuple ∑ ::= ⟨C, O, A, M, R, E, G, s0, H⟩, where C is a set of classes, O is a set of objects, A is a set of agents who are representatives of human users, M is a set of messages, R is a set of roles, E is a set of environments, G is a set of groups, s0 is the initial state of the collaborative system, and H is a set of users.

5.2. The Proposed Service Composition Model

To use the E-CARGO model for service composition in cloud manufacturing, we map the related concepts involved in service composition to the corresponding tuples; the specific mapping is shown in Figure 3.

Here, we introduce some necessary parameters to simplify the description of the model:
ℕ is the set of nonnegative integers;
m (= |A|) is the number of agents, which is mapped to the number of services in one service candidate set;
n (= |R|) is the number of roles, which is mapped to the number of all tasks whose category is the same as that of the specified subtask in the current total task;
q (= |Q|) is the number of all subtasks of the current original task after decomposition.

Next, we give the following definitions combined with the above parameters.
Definition 1: in group G, ⟨i, j⟩ is used to indicate that role j is assigned to agent i. It means that manufacturing task j is assigned to service i in the cloud manufacturing environment.
Definition 2: in group G, L(j) ∈ ℕ (1 ≤ j ≤ n) expresses that role j needs to be assigned at least L(j) agents. As for the cloud manufacturing environment, L(c, j) ∈ ℕ (1 ≤ c ≤ q, 1 ≤ j ≤ n) expresses that subtask j of category c needs at least L(c, j) services to be completed cooperatively.
Definition 3: in group G, La(i) ∈ ℕ (1 ≤ i ≤ m) means that agent i can be assigned at most La(i) roles. As for the cloud manufacturing environment, La(c, i) ∈ ℕ (1 ≤ c ≤ q, 1 ≤ i ≤ m) means that service i in the c-th candidate service set can serve at most La(c, i) tasks at the same time.
Definition 4: in group G, the matrix Q[i, j] ∈ [0, 1] (1 ≤ i ≤ m, 1 ≤ j ≤ n) represents the competence degree of agent i for role j, where 0 is the lowest competence degree and 1 is the highest. In the cloud manufacturing environment, we introduce the variable c representing the category to expand the matrix Q. The expanded matrix Q[c, i, j] ∈ [0, 1] (where 1 ≤ c ≤ q, 1 ≤ i ≤ m, 1 ≤ j ≤ n) describes the ability of service i in the c-th candidate service set to complete the j-th subtask of category c. The values of the competence degree are usually calculated according to the corresponding QoS model, which generally assigns different weights to different QoS criteria (e.g., time, cost, etc.).
Definition 5: in group G, the assignment matrix T[i, j] ∈ {0, 1} (where 1 ≤ i ≤ m, 1 ≤ j ≤ n) indicates whether role j is assigned to agent i: T[i, j] = 1 means role j is assigned to agent i, and T[i, j] = 0 means it is not. In the cloud manufacturing environment, T[c, i, j] ∈ {0, 1} (where 1 ≤ c ≤ q, 1 ≤ i ≤ m, 1 ≤ j ≤ n) indicates whether the j-th task of category c is assigned to service i in the c-th candidate service set.
Definition 6: in group G, the assignment efficiency σ represents the total competency degree of all agents assigned to roles, and it can be calculated by
σ = ∑_{i=1}^{m} ∑_{j=1}^{n} Q[i, j] × T[i, j]. (7)
In the cloud manufacturing environment, σ can be calculated by
σ = ∑_{c=1}^{q} ∑_{i=1}^{m} ∑_{j=1}^{n} Q[c, i, j] × T[c, i, j]. (8)
Definition 7: in group G, role j is workable when there are enough agents assigned to it, that is,
∑_{i=1}^{m} T[i, j] ≥ L(j). (9)
In the cloud manufacturing environment, a manufacturing task j can be effectively assigned when all of its subtasks are effectively assigned, and formula (10) should be satisfied:
∑_{i=1}^{m} T[c, i, j] ≥ L(c, j), 1 ≤ c ≤ q. (10)
Definition 8: in group G, agent i is workable when the number of roles assigned to it does not exceed its workload, that is,
∑_{j=1}^{n} T[i, j] ≤ La(i). (11)
In the cloud manufacturing environment, the assignment result for service i is effective if it satisfies the constraint condition in
∑_{j=1}^{n} T[c, i, j] ≤ La(c, i), 1 ≤ c ≤ q. (12)
Definition 9: in group G, the assignment matrix T is workable if each role and each agent is workable. If T is workable, then group G is workable. In the cloud manufacturing environment, if all subtasks decomposed from an original task can be assigned enough services that meet the workload requirements, we say that an effective service composition exists.
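As a check on the definitions, the following Java sketch evaluates a given assignment: it computes σ as in formula (8) and tests the workability conditions of formulas (10) and (12). The array shapes (Q[c][i][j], T[c][i][j]) and method names are illustrative assumptions.

public class GroupEvaluator {
    // Definition 6 / formula (8): total competency of the assignment.
    static double sigma(double[][][] q, int[][][] t) {
        double s = 0;
        for (int c = 0; c < q.length; c++)
            for (int i = 0; i < q[c].length; i++)
                for (int j = 0; j < q[c][i].length; j++)
                    s += q[c][i][j] * t[c][i][j];
        return s;
    }

    // Definitions 7-9: every subtask gets enough services (formula (10))
    // and no service exceeds its workload (formula (12)).
    static boolean workable(int[][][] t, int[][] l, int[][] la) {
        for (int c = 0; c < t.length; c++) {
            for (int j = 0; j < t[c][0].length; j++) {
                int assigned = 0;
                for (int i = 0; i < t[c].length; i++) assigned += t[c][i][j];
                if (assigned < l[c][j]) return false;
            }
            for (int i = 0; i < t[c].length; i++) {
                int load = 0;
                for (int j = 0; j < t[c][i].length; j++) load += t[c][i][j];
                if (load > la[c][i]) return false;
            }
        }
        return true;
    }
}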

Finally, the process of role collaboration-based service composition is to find the optimal assignment scheme T, where A (|A| = m), R (|R| = n), Q (|Q| = q), L, and La are given. Namely, we need to solve for the maximum value of the target objective σ shown in formula (13) under the specified constraints:
max σ = ∑_{c=1}^{q} ∑_{i=1}^{m} ∑_{j=1}^{n} Q[c, i, j] × T[c, i, j], (13)
subject to
T[c, i, j] ∈ {0, 1}, ∑_{i=1}^{m} T[c, i, j] ≥ L(c, j), and ∑_{j=1}^{n} T[c, i, j] ≤ La(c, i), for 1 ≤ c ≤ q, 1 ≤ i ≤ m, and 1 ≤ j ≤ n.

5.3. Cplex-Based Solving Method

For obtaining higher execution efficiency, this paper bypasses the compilation process of the IBM ILOG CPLEX development environment and directly references the ILOG development package in a Java project to solve the above model. The specific steps are as follows:
(1) Find the mapping relationship: the relevant elements of the service composition model are mapped to the four basic elements of a linear programming problem in ILOG (objective function, function variables, variable coefficients, and constraint conditions), where the objective function is σ, the variables of the objective function correspond to the assignment matrix T, the variable coefficients correspond to the quality of service (QoS) matrix Q, and the constraint conditions are derived from L and La.
(2) Add the objective function: when using ILOG to solve linear programming problems, we need to flatten the matrices Q and T into one-dimensional vectors V and X and form the final objective function. The optimization target is then added in the Java code by calling the following ILOG methods:
IloIntVar[] X = cplex.intVarArray(q ∗ m ∗ n, 0, 1);
cplex.addMaximize(cplex.scalProd(X, V));
where X[(c − 1)mn + (i − 1)n + j − 1] = T[c, i, j] and V[(c − 1)mn + (i − 1)n + j − 1] = Q[c, i, j] (1 ≤ c ≤ q, 1 ≤ i ≤ m, 1 ≤ j ≤ n).
(3) Add constraints: first, declare the expression objects of the constraints; then, for each subtask (c, j), add the constraint that at least L(c, j) of the corresponding variables take the value 1, and for each service (c, i), add the constraint that at most La(c, i) of the corresponding variables take the value 1.
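The following Java sketch shows how these three steps fit together using the standard ILOG Concert API; the flattening index, variable names, and result printing are our own assumptions rather than the paper's exact code.

import ilog.concert.*;
import ilog.cplex.*;

public class CompositionSolver {
    // q categories, m services per candidate set, n tasks per category;
    // v[c][i][j] = Q[c, i, j]; l[c][j] and la[c][i] as in Definitions 2 and 3.
    static void solve(int q, int m, int n, double[][][] v, int[][] l, int[][] la)
            throws IloException {
        IloCplex cplex = new IloCplex();
        IloIntVar[] x = cplex.intVarArray(q * m * n, 0, 1);  // flattened matrix T
        double[] coef = new double[q * m * n];               // flattened matrix Q
        for (int c = 0; c < q; c++)
            for (int i = 0; i < m; i++)
                for (int j = 0; j < n; j++)
                    coef[c * m * n + i * n + j] = v[c][i][j];
        cplex.addMaximize(cplex.scalProd(x, coef));          // objective (13)

        for (int c = 0; c < q; c++) {
            for (int j = 0; j < n; j++) {                    // at least L(c, j) services per subtask
                IloLinearIntExpr e = cplex.linearIntExpr();
                for (int i = 0; i < m; i++) e.addTerm(1, x[c * m * n + i * n + j]);
                cplex.addGe(e, l[c][j]);
            }
            for (int i = 0; i < m; i++) {                    // at most La(c, i) tasks per service
                IloLinearIntExpr e = cplex.linearIntExpr();
                for (int j = 0; j < n; j++) e.addTerm(1, x[c * m * n + i * n + j]);
                cplex.addLe(e, la[c][i]);
            }
        }
        if (cplex.solve()) System.out.println("sigma = " + cplex.getObjValue());
        cplex.end();
    }
}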

6. Experimental Analysis and Verification

Considering the types of open-source toolkits available in different development languages and the characteristics of the task decomposition algorithm and service composition model in this paper, the Python language and its mainstream scientific computing libraries are used to implement the task decomposition algorithm, and the Java language and the IBM ILOG CPLEX library are used to implement the service composition algorithm. The specific hardware and software experimental environment is shown in Table 3.

6.1. Case Analysis

The design and production process of military electric vehicles is extremely complex, and it is difficult for a single service provider to complete the manufacturing task. Therefore, after receiving the task, the platform needs to decompose it into executable subtasks of appropriately fine granularity and assign them to different service providers to complete the task cooperatively. After an in-depth study of the design and production process of military electric vehicles, we refine it into the more specific process shown in Table 4; the logical relationships between the process steps are shown in Figure 4, where Serial Number 0 represents the completed vehicle [39].

The implementation of the task decomposition method proposed in this paper relies on the existing services. Therefore, the first step is to collect a sufficient service data set. We crawl some related service sets from the network and expand them, within a reasonable range, through reproduction, mirroring, local adjustment, and other methods according to the characteristics of the crawled data. The service time of each service is randomly generated in a specified range according to the complexity of the service, and the dependency degree of each specified input type is initially set to a fixed value. To simplify the solution, we consider only two input types: logistics and communication.

To verify the effectiveness of the preliminary task decomposition algorithm, we take the design and production of the drive motor system as input and obtain the result shown in Figure 5, where the value of each node is the corresponding serial number in Table 4. Next, we reorganize the result using the reorganization algorithm proposed in this paper, but the result remains unchanged.

Then, the design and production of the electric drive system is taken as input to the proposed algorithm. The result of the preliminary decomposition, shown in Figure 6(a), and that of the reorganization, shown in Figure 6(b), are obtained, respectively.

Without any industry expert system, the proposed decomposition method can decompose complex manufacturing tasks into subtasks with sufficiently large candidate service sets according to the characteristics of the existing services on the platform, which provides the necessary premise for realizing collaborative manufacturing.

After the subtasks and the corresponding candidate service sets are obtained by task decomposition, an optimal assignment scheme is needed to lay the foundation for the subsequent task scheduling. To simplify the description process, this paper takes the task described as the design and production of the electric drive system as an example to verify the effectiveness of the proposed service composition model.

As shown in Figure 6, the task named design and production of the electric drive system can be decomposed into two first-level subtasks, design and production of the drive motor system and design and production of the power system, which can come from task decomposition or be released directly by other users. Assume that the numbers of such tasks are n1 and n2, respectively, and that the sizes of the corresponding candidate service sets are m1 and m2, respectively. The related data in this experiment are initialized as follows: n1, n2, m1, and m2 are random numbers between 3 and 5; the number of services that can be accepted by a single task (L) and the workload of a single service (La) are random numbers between 1 and 3; and the service quality of each service in the candidate set (in real application scenarios, calculated from its historical evaluation data) is a random number between 0 and 1. The initial situation is shown in Table 6.

According to the initialization data shown in Table 6, we use the CPLEX-based method to solve the model and obtain the assignment scheme shown in Table 5, which takes 204.02 ms; the sum of the QoS values in this scheme is 7.13. It should be noted that randomly generated data may admit no feasible assignment scheme, which is a common scenario in practical applications; the specific handling measures are beyond the scope of this paper.

6.2. Performance Analysis

On the one hand, current research on task decomposition is relatively scarce, and there is no unified evaluation standard. On the other hand, the main advantage of the proposed decomposition strategy is that it reduces the dependence on industry expert systems and solves the disconnection between the task decomposition and service composition processes, which has already been demonstrated in the implementation process. Therefore, we do not conduct an in-depth comparative analysis of it.

The proposed service composition model can transition smoothly from the task decomposition procedure to make full use of the existing service state in the cloud pool, and a variety of variable factors can easily be introduced to expand and optimize it. For example, if all identical or similar independent tasks are to be assigned the best services, the demander's acceptance matrix M can be introduced to expand model (13) into optimization model (14); if all interdependent services need a solid foundation for mutual cooperation, the provider's acceptance matrix N can be introduced to expand model (14) into model (15); in addition, with the rapid development of social networks, the peer effect in the platform's evaluation system becomes increasingly important and can directly or indirectly affect the choices of users themselves or of other users, so the user's social relationship matrix S can be introduced to obtain optimization model (17). However, considering the limited space of this paper, the modeling of M, N, and S is not described in detail. In the following experimental verification, we only take the proposed service composition model as the basic model and verify its effectiveness and adaptability by comparing it with the improved Hungarian algorithm [37, 42] (called KMB in the following description):

Firstly, the validity is verified, with the experimental parameters set as follows: q, n, m, L[j], and La[i] are all random integers, where, to improve the success rate of assignment, the value of m is usually set to an integer multiple of n (the number of services is generally required to exceed the number of tasks), La[i] is set between 1 and 3, and L[j] is set between 1 and 2m/n. The experimental results are shown in Table 7. It can be seen that the proposed method and the KMB algorithm obtain comparable results most of the time; in addition, the KMB algorithm is more efficient when the number of tasks n is less than 20. However, when n is greater than 20, the KMB algorithm's efficiency is significantly lower than that of the proposed method. Considering that in a real business environment the number of subtasks and the sizes of their corresponding service candidate sets are usually much larger than 20, the proposed composition model and the corresponding solution algorithm can better meet engineering requirements.

Secondly, to verify the overall performance of the proposed composition model and the corresponding solution algorithm, we use random integers of different scales for experimental analysis, where q is fixed at 2, n increases from 10 to 300 in steps of 10, m is set to 1 ∗ n, 2 ∗ n, 3 ∗ n, and 4 ∗ n in turn, and L[j] and La[i] are random numbers from 1 to m/n. At the same time, to reduce the randomness of the experimental results, each data triple (q, m, n) is randomly tested 100 times, and the average value is collected as the final experimental data, yielding the performance trend chart shown in Figure 7.

It can be seen that the KMB algorithm's execution efficiency decreases sharply with the increase of La[i], and its computational complexity grows accordingly. Compared with the KMB algorithm, the execution efficiency of the proposed method is more stable, and when La[i] is greater than or equal to 2, it is far better than that of the KMB algorithm. At the same time, we notice that both the KMB algorithm and our solution method show some fluctuations during execution. When m = n, more exceptions occur in the proposed composition model and its solution method than in the KMB algorithm, which is caused by assignment schemes being unreachable under the existing conditions (the numbers of services and tasks are the same, but some tasks need multiple services). When m > n, the exceptions of our method disappear, whereas the KMB algorithm still produces some anomalies in certain cases. In most cases, both solutions can meet practical needs, but compared with the KMB algorithm, our composition model and solution method are clearly better in both efficiency and stability.

7. Conclusions and Future Work

Optimal manufacturing resource allocation is a core problem in the cloud manufacturing mode. In the specific implementation process, the problems of resource virtualization, task decomposition, task/service matching, service composition, and scheduling should be solved in turn. This paper mainly proposes solutions for task decomposition and the service composition model. For task decomposition, we refine the description model of tasks and services and propose a preliminary decomposition scheme based on task/service matching together with a reorganization strategy based on the characteristics of the candidate service sets. As a result, the problem of discontinuity between task decomposition and service composition is solved. To solve the task-to-service assignment problem, we design the service composition model using E-CARGO and solve it using CPLEX, which not only greatly reduces the risk of falling into the local optima typical of heuristic algorithms but also provides the necessary foundation for introducing more variable factors (such as cooperation, conflict, and other constraints). Finally, the practicability of the decomposition strategy and the service composition model is demonstrated by the experimental analysis.

The future work may follow several aspects:
(1) Task scheduling in manufacturing resource optimized allocation. Compared with the management of enterprises' independent resources, the management of shared resources is more dynamic (e.g., devices can join or withdraw from sharing services at any time). Hence, the difficulty of task scheduling will be greatly increased.
(2) Personalized recommendation applied to manufacturing resource optimized allocation. To improve the user-friendliness and convenience of online platforms, personalized service recommendation for different customer requirements is an effective means. However, since manufacturing services usually appear in the form of composite services, existing Web-service-based personalized recommendation technologies are difficult to apply effectively.
(3) Real-time response to large-scale cases in a dynamic environment should be investigated to prove the superiority of blockchain-based methods over centralized optimization methods, and more comparisons with other methods (e.g., PSO, game theory) should be made [43].

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the Key Research and Development Program of Anhui Province of China (T04a05020094, 904a05020091, 04a05020091, 04a05020092, and 04a05020093), the Scientific Research Project of Chaohu University (XLZ-202107, XLZ-201807), the Teaching and Research Project of Chaohu University (ch18jxyj18), and the Applied Curriculum Project of Chaohu University (ch18yygc13).