Abstract

Cloud manufacturing (CMfg) is a new service-oriented smart manufacturing paradigm that provides a new product development model in which users can configure, select, and utilize customized manufacturing services on demand. Because of its massive manufacturing resources, various users with individualized demands, heterogeneous manufacturing systems and platforms, and diverse data and file types, CMfg is widely recognized as a complex manufacturing system operating in a complex environment and has received considerable attention in recent years. In practical CMfg scenarios, the volume of a manufacturing task may be very large, and the cloud pool usually contains many candidate manufacturing services for the corresponding subtasks. These candidate services must be selected and composed to complete a complex manufacturing task. Manufacturing service composition therefore plays a very important role in the CMfg lifecycle and helps keep the complex manufacturing system stable, safe, reliable, efficient, and effective. In this paper, a new manufacturing service composition scheme named Multi-Batch Subtasks Parallel-Hybrid Execution Cloud Service Composition for Cloud Manufacturing (MBSPHE-CSCCM) is proposed; such composition is one of the most difficult combinatorial optimization problems and is NP-hard. To address the problem, a novel optimization method named Improved Hybrid Differential Evolution and Teaching Based Optimization (IHDETBO) is proposed and introduced in detail. The results of simulation experiments and a case study validate the effectiveness and feasibility of the proposed algorithm.

1. Introduction

The continuing rise in customer expectations, the demand for environmentally friendly production, the need for rapid responsiveness to market changes, and other competitive pressures pose a critical challenge to the manufacturing industry. Under these pressures, manufacturing partners along the industry chain must work together to provide offerings that contain both material products and immaterial services or functionalities, so as to achieve a win-win situation among users, enterprises, the environment, and society. Cloud manufacturing (CMfg) has emerged to meet this need. The term “cloud manufacturing” was first coined by Li et al. [1]. CMfg is a networked manufacturing mode distinct from traditional networked manufacturing modes such as ASP and MGrid. Traditional networked manufacturing is an independent and static system that lacks dynamics, intelligent clients, and an effective business model. Its shared resources are mostly software and other soft resources, seldom involving hardware resources or the overall manufacturing capacity of enterprises, and its concept emphasizes the centralized use of decentralized resources. As a new manufacturing paradigm, CMfg is a large-scale networked distributed manufacturing mode that simultaneously provides multiple users with customized manufacturing services on a CMfg system or platform by organizing online manufacturing resources, which are virtualized and encapsulated as manufacturing services. Both “integration of distributed resources” and “distribution of integrated resources” are reflected in CMfg. The advantage of CMfg is that the philosophy of “software as a service” is expanded to “manufacturing as a service”, which enables the cloud system to virtualize and servitize manufacturing resources and capabilities, covering not only soft resources but also hard resources. Because CMfg involves massive manufacturing resources, various users with individualized demands, heterogeneous manufacturing systems and platforms, and diverse data and file types, it is clearly a complex system running in a complex environment. Such a system is strongly required to be stable, safe, reliable, efficient, and effective, and many theoretical and technological challenges remain for CMfg in practical application.

In the past decades, through the integration of advanced technologies such as Cyber-Physical Systems (CPS) [2], Artificial Intelligence (AI) [3], the Internet of Things (IoT) [4], Big Data (BD) [5], supply chain management [6], and cloud computing [7], CMfg has improved rapidly and has gradually come into focus. Many research institutions now work on CMfg, and numerous theoretical studies on its definition, architecture, resource modeling, QoS evaluation, and resource scheduling have been reported. More importantly, thanks to these technologies, more and more national governments are also promoting CMfg. Taking “Industrie 4.0” of the German government as an example, any production system can seamlessly access the cloud platform to provide remote maintenance or to find personalized custom solutions.

Since the manufacturing services in CMfg are massive, users can acquire extensive production capability to fulfill their customized demands. A complex manufacturing task, in particular, is usually divided into several subtasks, and for every subtask there are many manufacturing services to choose from. Although these candidate services differ in performance or QoS, they all satisfy the user's requirements. Such collaboration is essentially enabled by the composition of manufacturing services, i.e., cloud service composition for cloud manufacturing (CSCCM). CSCCM involves selecting appropriate, optimal manufacturing services from the candidates and assembling them together with logistics. In other words, CSCCM is an interconnected set of multiple specialized manufacturing services that offers products or functionalities to solve a complex manufacturing task according to the user's requirements. Such composition is one of the most difficult combinatorial optimization problems and is NP-hard. Tao et al. [8] divide existing service composition methods into five categories: business-flow-based, AI-based, graph-based, agent-based, and QoS-aware service composition. Much current research focuses on QoS-aware service composition, because QoS attributes such as time, cost, reliability, and reputation deeply influence user satisfaction. Besides, many classical swarm intelligence and heuristic algorithms that have been widely studied in traditional manufacturing systems have also been introduced into CMfg, such as the genetic algorithm (GA) [9], simulated annealing (SA) [10], particle swarm optimization (PSO) [11], ant colony optimization (ACO) [12], artificial bee colony (ABC) [13], chaos optimization (CO) [8], the differential evolution algorithm (DE) [14], and teaching-learning-based optimization (TLBO) [15, 16].

Although the aforementioned algorithms and their improved versions can solve NP-hard problems to a certain degree, they do not handle mass tasks in CMfg well. When the task amount is very large and there are many candidate manufacturing services for the corresponding subtasks, several candidate services should be adopted per subtask to achieve better optimization. Moreover, when interfaces are highly standardized, multiple services can execute in a parallel and hybrid fashion. The encoding methods and operators of the aforementioned algorithms act on single gene locations, which is unsuitable for the scenario where multiple services are selected for one subtask, because with multiple services the transportation schemes between subtasks are diverse and must be considered when establishing the objective function. In addition, the global search ability of these algorithms is not ideal for the high-dimensional, complex objective functions in CMfg. So, in this paper, we propose a manufacturing service composition scheme named Multi-Batch Subtasks Parallel-Hybrid Execution Cloud Service Composition for Cloud Manufacturing (MBSPHE-CSCCM). Meanwhile, a novel optimization method named Improved Hybrid Differential Evolution and Teaching Based Optimization (IHDETBO) is proposed by adopting and improving the classical DE and TLBO algorithms.

The remainder of the paper is organized as follows. In Section 2, our previously proposed CMfg architecture is presented, and the problem statement is discussed. In Section 3, the basic concepts and operations of DE and TLBO algorithms are both summarized. In Section 4, the proposed algorithm IHDETBO is introduced in detail. In Section 5, the simulation experiments and case study are given, and the experimental results are discussed. In Section 6, the paper is summarized with concluding remarks and future work.

2. Problem Statement

In our previous work, we discussed the CMfg architecture [17-19]. As shown in Figure 1, there are three kinds of roles in CMfg: the resource demander (RD), who demands products, manufacturing resources, or services, for example, users or public institutions; the resource provider (RP), who provides products, manufacturing resources, or services, for example, enterprises or service cooperators; and the manager, who designs, develops, and maintains the CMfg equipment and related software such as professional middleware. From the system perspective, CMfg is composed of a cloud manufacturing platform (CMP) and cloud ends (CE). The latter contain cloud demanders (CD) and cloud providers (CP), corresponding to RD and RP, respectively. The CPs publish, update, cancel, and provide manufacturing resources and services through the CMP; the CDs, in turn, submit requirements to and obtain the corresponding products or services from the CMP, which functions like a great resource pool consisting of several sub-CMPs that can interact with each other. In addition, through cutting-edge technologies such as the Internet of Things (IoT), Big Data (BD), and Cyber-Physical Systems (CPS), manufacturing resources in a CP are encapsulated as cloud services (CS), which can then be accessed and obtained by CDs from the CMP. Thus, CMfg can provide users with manufacturing resources highly virtualized as services throughout the manufacturing life cycle [20].

Generally speaking, the lifecycle of CMfg has several phases, much like cloud computing: the definition and publication of CSs, the proposal of manufacturing task requirements, the matching of CSs, the composition and provision of CSs, the determination of manufacturing contracts, manufacturing and distribution, and the disposal of manufacturing tasks [22]. The work proposed in this paper addresses the phase of the composition and provision of CSs. As shown in Figure 2, after users have submitted a manufacturing task to the CMP, the CMP first decomposes the task into several subtasks according to the knowledge of domain experts, searches for proper CSs for the corresponding subtasks from the cloud service pool (CSP), and then selects and composes these CSs with optimization algorithms to satisfy users' QoS requirements such as time, cost, availability, and reputation. Because the total amount of tasks may be very large and there are many candidate CSs meeting the performance requirements of the corresponding subtasks, several CSs can be chosen to complete the same subtask in parallel, and each of them delivers its production results to one or more CSs for the next subtask. Taking the first and second subtasks as an example (shown as SubT1 and SubT2 in Figure 2), they have two corresponding candidate CS sets $CSS_1=\{CS_{1,1}, CS_{1,2}, \dots, CS_{1,m_1}\}$ and $CSS_2=\{CS_{2,1}, CS_{2,2}, \dots, CS_{2,m_2}\}$, and the total amounts of these two subtasks are both T. Some of these CSs will be allocated a production amount $T_{i,j}$, where $\sum_j T_{i,j}=T$. So, some CSs in $CSS_1$ will be allocated production tasks and will deliver their production results to one or more CSs in $CSS_2$ after completing their own tasks. For example, in the composition scenario shown in Figure 2, three candidate CSs are chosen for the first subtask, i.e., $CS_{1,1}$, $CS_{1,2}$, and $CS_{1,3}$. After completing their own tasks, the production results of $CS_{1,1}$ are delivered to $CS_{2,1}$ and $CS_{2,2}$; similarly, those of $CS_{1,2}$ go to $CS_{2,1}$ and $CS_{2,2}$, and those of $CS_{1,3}$ to $CS_{2,2}$ and $CS_{2,3}$. Thus, a mass task can be transformed into multi-batch subtasks which are executed in a parallel-hybrid manner. Obviously, the key issue is CS composition with respect to users' QoS requirements, which is in essence an NP-hard combinatorial optimization problem.

3. Basic Concepts and Operations of DE Algorithm and TLBO Algorithm

MBSPHE-CSCCM is an NP-hard combinatorial optimization problem. Furthermore, with the rapidly increasing number of candidate CSs and the growing amount of cloud manufacturing tasks and subtasks, the search space becomes very large, and it is hard to solve a large-scale MBSPHE-CSCCM problem with traditional methods. In this paper, the DE and TLBO algorithms are adopted to design a new method termed IHDETBO. DE and TLBO are both recently developed meta-heuristic algorithms, and combining them helps enhance and balance the exploration and exploitation capacities. The basic concepts and operations of these two algorithms are detailed in the following sections.

3.1. DE Algorithm

DE was first proposed by Storn and Price [14]. It is a powerful evolutionary algorithm based on stochastic search, and an efficient and effective global optimizer in the continuous search domain [23]. Similar to other evolutionary algorithms such as ABC, DE first randomly generates a population of NP n-dimensional initial vectors, the so-called individuals, i.e., $x_d = (x_{d,1}, x_{d,2}, \dots, x_{d,n})$, $d = 1, 2, \dots, NP$, generated by (1). Here, NP is one of the parameters of DE and indicates the population size:

$$x_{d,j} = x_j^{\min} + \operatorname{rand}_j(0,1)\cdot\left(x_j^{\max} - x_j^{\min}\right) \tag{1}$$

where $j = 1, 2, \dots, n$; $[x_j^{\min}, x_j^{\max}]$ is the boundary of the jth variable; and $\operatorname{rand}_j(0,1)$ is a random number uniformly distributed over the interval $[0,1]$.

Then the population evolves over generations through three types of operations such as mutation, crossover, and selection till one of the termination criterions is satisfied.

3.1.1. Mutation Operation

In the mutation operation of the kth generation, DE produces a mutant vector $v_d^k$ for each target vector $x_d^k$. The seven most frequently referred mutation operations are listed as follows:

(1) rand/1/bin [14]:
$$v_d^k = x_{d_1}^k + F\left(x_{d_2}^k - x_{d_3}^k\right) \tag{2}$$
(2) rand/2/bin [23]:
$$v_d^k = x_{d_1}^k + F\left(x_{d_2}^k - x_{d_3}^k\right) + F\left(x_{d_4}^k - x_{d_5}^k\right) \tag{3}$$
(3) best-to-rand/1/bin [14]:
$$v_d^k = x_{best}^k + F\left(x_{d_1}^k - x_{d_2}^k\right) \tag{4}$$
(4) best-to-rand/2/bin [24]:
$$v_d^k = x_{best}^k + F\left(x_{d_1}^k - x_{d_2}^k\right) + F\left(x_{d_3}^k - x_{d_4}^k\right) \tag{5}$$
(5) rand-to-best/2/bin [25]:
$$v_d^k = x_{d_1}^k + F\left(x_{best}^k - x_{d_1}^k\right) + F\left(x_{d_2}^k - x_{d_3}^k\right) + F\left(x_{d_4}^k - x_{d_5}^k\right) \tag{6}$$
(6) current-to-rand/1/bin [26]:
$$v_d^k = x_d^k + K\left(x_{d_1}^k - x_d^k\right) + F\left(x_{d_2}^k - x_{d_3}^k\right) \tag{7}$$
(7) current-to-pbest/1/bin [24]:
$$v_d^k = x_d^k + F\left(x_{pbest}^k - x_d^k\right) + F\left(x_{d_1}^k - x_{d_2}^k\right) \tag{8}$$

where $d_1$, $d_2$, $d_3$, $d_4$, and $d_5$ are distinct random integers uniformly generated from the set $\{1, 2, \dots, NP\}\setminus\{d\}$; $x_{best}^k$ represents the best individual in the kth generation; $x_{pbest}^k$ represents an individual randomly selected from the top $100p\%$ best individuals in the kth generation, with $p \in (0, 1]$; $K$ is a control parameter randomly chosen within the range $[0, 1]$; and $F$ is a fixed real number ($F \in (0, 2]$) named the scale parameter (step size), which controls the amplification of the difference vectors.
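To make the operators concrete, a compact Python sketch of three of these mutation strategies (following the reconstructed formulas above) is given below; the function names and signatures are our own illustration, not part of the original paper.

import numpy as np

# X: population of shape (NP, n); F: scale parameter; fit: fitness values;
# d, d1, d2 are indices drawn as described in the text (names are ours)
def rand_1(X, F, d1, d2, d3):
    # Eq. (2): rand/1 mutation
    return X[d1] + F * (X[d2] - X[d3])

def best_to_rand_1(X, F, fit, d1, d2):
    # Eq. (4): perturb the best individual with a random difference
    return X[np.argmin(fit)] + F * (X[d1] - X[d2])

def current_to_pbest_1(X, F, fit, d, d1, d2, p=0.1):
    # Eq. (8): move the current individual towards a random top-100p% individual
    top = np.argsort(fit)[:max(1, int(p * len(X)))]
    pbest = X[np.random.choice(top)]
    return X[d] + F * (pbest - X[d]) + F * (X[d1] - X[d2])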

3.1.2. Crossover Operation

In the crossover operation following the mutation operation of the kth generation, the trial/offspring vector $u_d^k$ is generated by mixing each pair of the target vector $x_d^k$ and its corresponding mutant vector $v_d^k$. In the basic version, the binomial crossover operation is defined as follows:

$$u_{d,j}^k = \begin{cases} v_{d,j}^k, & \text{if } \operatorname{rand}_j(0,1) \le CR \text{ or } j = j_{rand} \\ x_{d,j}^k, & \text{otherwise} \end{cases} \tag{9}$$

where $\operatorname{rand}_j(0,1)$ is a uniform random number on the interval $[0,1]$; CR is the crossover constant determined by users within the interval $[0,1]$; and $j_{rand}$ is a randomly chosen index in $\{1, 2, \dots, n\}$ which ensures that the trial vector differs from $x_d^k$ in at least one element.

3.1.3. Selection Operation

A greedy selection operation is used to select the individual that survives into the next, (k+1)th, generation from each pair of the target vector $x_d^k$ and the corresponding trial/offspring vector $u_d^k$ by comparing their fitness. For minimization problems, the one with the smaller fitness is chosen; for maximization problems, the opposite is true.
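For concreteness, the following is a minimal Python sketch of the canonical DE loop combining (1), the rand/1 mutation of (2), the binomial crossover of (9), and greedy selection; the bounds, control parameters, and the sphere test function are illustrative assumptions, not part of the original description.

import numpy as np

def de_rand_1_bin(f, bounds, NP=50, F=0.5, CR=0.9, max_gen=500):
    # bounds: array of shape (n, 2) holding [min, max] per dimension
    n = len(bounds)
    low, high = bounds[:, 0], bounds[:, 1]
    # Eq. (1): random uniform initialization inside the box
    X = low + np.random.rand(NP, n) * (high - low)
    fit = np.array([f(x) for x in X])
    for _ in range(max_gen):
        for d in range(NP):
            # three distinct indices different from d
            d1, d2, d3 = np.random.choice([i for i in range(NP) if i != d], 3, replace=False)
            v = np.clip(X[d1] + F * (X[d2] - X[d3]), low, high)   # Eq. (2)
            # Eq. (9): binomial crossover
            j_rand = np.random.randint(n)
            mask = np.random.rand(n) <= CR
            mask[j_rand] = True
            u = np.where(mask, v, X[d])
            # greedy selection (minimization)
            fu = f(u)
            if fu <= fit[d]:
                X[d], fit[d] = u, fu
    best = np.argmin(fit)
    return X[best], fit[best]

# usage: minimize the sphere function on [-100, 100]^10
sphere = lambda x: float(np.sum(x * x))
xb, fb = de_rand_1_bin(sphere, np.array([[-100.0, 100.0]] * 10))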

3.2. TLBO Algorithm

The TLBO algorithm was first proposed by Rao et al. [15, 16]. Based on the influence of a teacher on the output of learners in a class, it simulates the traditional teaching-learning phenomenon of a classroom, in which every individual tries to learn something from the others [27]. The algorithm therefore has two fundamental modes of learning: (i) learning from the teacher (known as the teacher phase) and (ii) interacting with the other learners (known as the learner phase). The initialization is the same as in DE, using (1).

3.2.1. Teacher Phase

In the teacher phase, the algorithm simulates the students learning from the teacher. The teacher (usually the best individual of the entire population) puts maximum effort into raising the mean grade of the class toward his own level, and the learners gain knowledge according to the quality of the teaching delivered and the quality of the learners present in the class [27]. At any teaching-learning cycle k, let $x_{teacher}^k$ be the teacher and $M^k = (m_1^k, m_2^k, \dots, m_n^k)$ be the mean vector whose every element is the mean value of the corresponding dimension, i.e., $m_j^k = \frac{1}{NP}\sum_{d=1}^{NP} x_{d,j}^k$, $j = 1, 2, \dots, n$, where j and n have the same meaning as in DE. The teacher tries to improve the other individuals by shifting their positions towards his own position. The difference between the result of the teacher and the mean result of the learners for learner d is calculated as follows:

$$Difference_d^k = r_d\left(x_{teacher}^k - T_F\, M^k\right) \tag{10}$$

where $r_d$ is a random number in the range $[0,1]$, and $T_F$ is the teaching factor which decides the value of the mean to be changed; it can be either 1 or 2 and is decided randomly as follows:

$$T_F = \operatorname{round}\left(1 + \operatorname{rand}(0,1)\right) \tag{11}$$

Based on $Difference_d^k$, individual $x_d^k$ in the current population is updated according to the following:

$$x_d^{new,k} = x_d^k + Difference_d^k \tag{12}$$

where $x_d^{new,k}$ is the updated value and will be accepted if it gives a better fitness value.

3.2.2. Learner Phase

In the learner phase following the teacher phase, the algorithm simulates the students (individuals) learning through interaction with each other via discussions, presentations, formal communications, and so on. A learner develops his knowledge if the other learners are better. So in this phase of the kth generation, a learner $x_d^k$ randomly selects another learner $x_e^k$ ($e \ne d$); if the fitness value of $x_e^k$ is better than that of $x_d^k$, then $x_d^k$ is updated by (13), otherwise by (14):

$$x_d^{new,k} = x_d^k + r_d\left(x_e^k - x_d^k\right) \tag{13}$$

$$x_d^{new,k} = x_d^k + r_d\left(x_d^k - x_e^k\right) \tag{14}$$

where $r_d$ is a random number within the range $[0,1]$, and $x_d^{new,k}$ will be accepted if it gives a better fitness value.
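As with DE above, a minimal Python sketch of one TLBO cycle may help; it follows (10)-(14) directly, and the in-place update convention and minimization assumption are ours.

import numpy as np

def tlbo_cycle(f, X, fit):
    # X: population of shape (NP, n); fit: fitness per individual (minimization)
    NP, n = X.shape
    # teacher phase, Eqs. (10)-(12)
    teacher = X[np.argmin(fit)]
    mean = X.mean(axis=0)
    for d in range(NP):
        TF = np.random.randint(1, 3)                       # Eq. (11): TF is 1 or 2
        x_new = X[d] + np.random.rand() * (teacher - TF * mean)   # Eqs. (10), (12)
        f_new = f(x_new)
        if f_new < fit[d]:                                 # accept only if better
            X[d], fit[d] = x_new, f_new
    # learner phase, Eqs. (13)-(14)
    for d in range(NP):
        e = np.random.choice([i for i in range(NP) if i != d])
        r = np.random.rand()
        if fit[e] < fit[d]:
            x_new = X[d] + r * (X[e] - X[d])               # Eq. (13): learn towards the better
        else:
            x_new = X[d] + r * (X[d] - X[e])               # Eq. (14): move away from the worse
        f_new = f(x_new)
        if f_new < fit[d]:
            X[d], fit[d] = x_new, f_new
    return X, fit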

4. IHDETBO for Cloud Manufacturing

To solve the MBSPHE-CSCCM problem, the IHDETBO algorithm is proposed by integrating an improved DE (named the IDE phase) and the improved teacher phase of TLBO (named the IT phase) to enhance and balance the exploration and exploitation capacities. First, block encoding and initialization are applied to generate the population. In the IDE phase, block mutation, block crossover, and block selection are performed; in addition, the factors F and CR are both improved and calculated with an adaptive strategy to enhance population diversity and pass better individuals to the next phase [28]. In the learner phase of canonical TLBO, a learner may select an inappropriate learner (a poor student) to learn from, which slows down convergence and weakens local search. By contrast, in the teacher phase of canonical TLBO every learner is improved towards the teacher's level, and the algorithm exhibits better convergence. Therefore, only the teacher phase is adopted and improved as the IT phase, and the factor TF is also improved to make the simulation more realistic. The algorithm is discussed in detail as follows.

4.1. Parameter Settings

To facilitate the discussion of the MBSPHE-CSCCM problem, the parameters are unified as follows.

Task: cloud manufacturing task.

K: the total amount of the cloud manufacturing task.

$SubT = \{SubT_1, SubT_2, \dots, SubT_n\}$: the set of subtasks decomposed by the CMP, where i is a natural number in the range $[1, n]$ and n is the total number of subtasks. Let the amount of every subtask $SubT_i$ be $K_i$. Generally, $K_i = \lambda_i K$ and $\lambda_i \ge 1$ must both hold, where $\lambda_i$ is a positive integer.

$CSS_i = \{CS_{i,1}, CS_{i,2}, \dots, CS_{i,m_i}\}$: the set of candidate CSs for the corresponding subtask $SubT_i$, where $j \in [1, m_i]$ and $m_i$ is the number of candidate CSs.

4.2. Block Encoding and Initialization

To address the MBSPHE-CSCCM problem, a new encoding method named block encoding is proposed. An MBSPHE-CSCCM scheme can be encoded as a chromosome by an integer array whose length equals the total number of CSs [21], with every integer in the range $[0, K_i]$. By adopting the calculation method of subtask height, a genebit partition method based on subtask rank is proposed, so that every genebit in the integer array corresponds one-to-one with a CS [29]. As shown in Figure 3, the rank of the corresponding subtask is its location in the cloud manufacturing chain, and the number of genebits in every rank equals the number of candidate CSs for the corresponding subtask. The genebits in the same rank can thus be seen as a block of the chromosome. The genebit index of $CS_{i,j}$ is calculated as follows:

$$index(CS_{i,j}) = \sum_{l=1}^{i-1} m_l + j \tag{15}$$

The initialization is generated randomly, and the sum over every rank must equal the amount of the corresponding subtask; i.e., every genebit is initialized as follows:

$$x_{i,j} = \operatorname{round}\left(K_i \cdot \frac{r_{i,j}}{\sum_{l=1}^{m_i} r_{i,l}}\right), \quad r_{i,j} = \operatorname{rand}(0,1) \tag{16}$$

so that $\sum_{j=1}^{m_i} x_{i,j} = K_i$ holds for every rank i (any rounding residual is absorbed by one genebit of the rank).
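A minimal Python sketch of this block encoding and initialization follows; the subtask amounts and candidate counts in the usage line are hypothetical, and the rounding-residual repair is an assumption consistent with (16).

import numpy as np

def init_individual(K_amounts, m_counts):
    # K_amounts[i]: amount K_i of subtask i; m_counts[i]: number m_i of candidate CSs
    blocks = []
    for K_i, m_i in zip(K_amounts, m_counts):
        r = np.random.rand(m_i)
        x = np.floor(K_i * r / r.sum()).astype(int)   # Eq. (16): proportional split
        x[np.argmax(r)] += K_i - x.sum()              # absorb the rounding residual
        blocks.append(x)
    return np.concatenate(blocks)                      # chromosome = ranks laid end to end

# hypothetical example: 3 subtasks with amounts 4000, 4000, 1000
# and 4, 3, 5 candidate CSs, respectively
chrom = init_individual([4000, 4000, 1000], [4, 3, 5])
# each block of the chromosome sums to the amount of its subtask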

4.3. IDE Phase

In the IDE phase, there are still the three operations mentioned above: mutation, crossover, and selection. Based on the block encoding and initialization, mutation and crossover are both executed rank by rank and are named block mutation and block crossover, respectively. Additionally, the parameters F and CR are improved in the block mutation and block crossover operations, respectively.

4.3.1. Improved Block Mutation Operation

Let $X_{d,i}^h$ be rank i of individual d in the hth generation, i.e., the integer sub-array in the genebits of rank i. After this operation, a block mutant vector $V_{d,i}^h$ is obtained. To enhance the exploration capacity and population diversity, we employ the “rand/1/bin” mutation operation as in (17), which demonstrates slow convergence speed but strong exploration capability [14]:

$$V_{d,i}^h = X_{d_1,i}^h + F_d\left(X_{d_2,i}^h - X_{d_3,i}^h\right) \tag{17}$$

Since every block of every individual sums to $K_i$, the sum of the elements of $V_{d,i}^h$ equals that of $X_{d,i}^h$.

In the IHDETBO algorithm, the mutation factor $F_d$ is improved and calculated adaptively with the aim of generating diversified individuals. At each generation h, the $F_d$ of each individual is independently generated as in (18), according to a Cauchy distribution with location parameter $\mu_F$ and scale parameter 0.1, and then truncated to 1 if $F_d > 1$ or regenerated if $F_d \le 0$:

$$F_d = \operatorname{randc}\left(\mu_F,\, 0.1\right) \tag{18}$$

Let $S_F$ be the set of all successful mutation factors in this generation. The location parameter $\mu_F$ of the Cauchy distribution is initialized to 0.5 and then updated at the end of each iteration as follows:

$$\mu_F = (1 - c)\cdot\mu_F + c\cdot\operatorname{mean}_L(S_F) \tag{19}$$

where c is a positive constant between 0 and 1, and $\operatorname{mean}_L(S_F)$ is the Lehmer mean, calculated as in (20):

$$\operatorname{mean}_L(S_F) = \frac{\sum_{F \in S_F} F^2}{\sum_{F \in S_F} F} \tag{20}$$

According to the existing research results [24], a truncated Cauchy distribution is introduced to generate the adaptive $F_d$ because it diversifies the mutation factors better than a distribution highly concentrated around a certain value, and thus helps avoid the premature convergence that often occurs with greedy mutation strategies. Additionally, the Lehmer mean of $S_F$ makes the adaptation of $\mu_F$ place more weight on larger successful mutation factors, which helps propagate larger $F_d$ values and further counteracts premature convergence.
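The following short Python sketch shows how (18)-(20) can be realized; the truncation rule follows the description above, while the constant c = 0.1 and the helper names are our own assumptions.

import numpy as np

def sample_F(mu_F, scale=0.1):
    # Eq. (18): truncated Cauchy sampling of the mutation factor
    while True:
        F = mu_F + scale * np.random.standard_cauchy()
        if F > 0:
            return min(F, 1.0)   # truncate to 1 if F > 1; regenerate if F <= 0

def update_mu_F(mu_F, S_F, c=0.1):
    # Eqs. (19)-(20): move mu_F towards the Lehmer mean of successful factors
    if not S_F:
        return mu_F
    S = np.asarray(S_F)
    lehmer = (S ** 2).sum() / S.sum()
    return (1 - c) * mu_F + c * lehmer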

4.3.2. Improved Block Crossover Operation

Let $U_{d,i}^h$ be the block trial/offspring vector; the block crossover operation is executed rank by rank. Following (9), it is calculated as follows:

$$U_{d,i,j}^h = \begin{cases} V_{d,i,j}^h, & \text{if } \operatorname{rand}_j(0,1) \le CR_d \text{ or } j = j_{rand} \\ X_{d,i,j}^h, & \text{otherwise} \end{cases} \tag{21}$$

To keep the sum of the elements of $U_{d,i}^h$ unchanged, a fine-tuning operation is needed; i.e., every element of $U_{d,i}^h$ is increased or decreased proportionally so that the block again sums to $K_i$.
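A sketch of the block crossover with the proportional fine-tuning follows, under the same assumptions as the earlier snippets; the exact clipping and rounding repair is our own choice, since the paper only requires that the block sum be restored.

import numpy as np

def block_crossover(X_block, V_block, CR_d, K_i):
    m = len(X_block)
    j_rand = np.random.randint(m)
    mask = np.random.rand(m) <= CR_d           # Eq. (21): binomial crossover per genebit
    mask[j_rand] = True
    U = np.where(mask, V_block, X_block).astype(float)
    # fine-tuning: clip negatives from mutation (an assumption), then rescale
    # proportionally so the block sums to K_i again
    U = np.clip(U, 0, None)
    U = np.floor(U * K_i / max(U.sum(), 1)).astype(int)
    U[np.argmax(U)] += K_i - U.sum()           # absorb the rounding residual
    return U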

In the IHDETBO algorithm, the crossover probability $CR_d$ is also improved and calculated adaptively, with the aim of passing better individuals to the following IT phase. At each generation h, the $CR_d$ of each individual is independently generated as in (22), according to a normal distribution with mean $\mu_{CR}$ and standard deviation 0.1, and then truncated to $[0, 1]$:

$$CR_d = \operatorname{randn}\left(\mu_{CR},\, 0.1\right) \tag{22}$$

Let $S_{CR}$ be the set of all successful crossover probabilities in this generation. The mean $\mu_{CR}$ of the normal distribution is initialized to 0.5 and then updated at the end of each iteration as follows:

$$\mu_{CR} = (1 - c)\cdot\mu_{CR} + c\cdot\operatorname{mean}_A(S_{CR}) \tag{23}$$

where c is a positive constant between 0 and 1, and $\operatorname{mean}_A(S_{CR})$ is the arithmetic mean of $S_{CR}$.

According to the existing research results [24], better control parameter values tend to generate better individuals that are more likely to survive, so these values should be propagated to the following generations. The set $S_{CR}$ therefore records recent successful crossover probabilities, and sampling around $\mu_{CR}$ with a small standard deviation in (22) generates new $CR_d$ values that are, with high probability, close to those recent successful values.
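The CR adaptation of (22)-(23) mirrors the F adaptation and can be sketched as follows (again with c = 0.1 as an assumed setting):

import numpy as np

def sample_CR(mu_CR, std=0.1):
    # Eq. (22): normal sampling truncated to [0, 1]
    return float(np.clip(np.random.normal(mu_CR, std), 0.0, 1.0))

def update_mu_CR(mu_CR, S_CR, c=0.1):
    # Eq. (23): move mu_CR towards the arithmetic mean of successful probabilities
    return mu_CR if not S_CR else (1 - c) * mu_CR + c * float(np.mean(S_CR))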

4.3.3. Selection Operation

After block mutation and block crossover, a trial/offspring individual $U_d^h$ is obtained, and the abovementioned greedy selection operation is adopted to pass $U_d^h$ or $X_d^h$ into the next phase.

4.4. IT Phase

In the IHDETBO algorithm, the teacher phase of TLBO is improved and adopted as the IT phase following the IDE phase. Based on the block operations illustrated in Section 4.3, the operation in this phase is also executed block by block. Additionally, the factor TF is improved to make the simulation more realistic.

4.4.1. Block Teaching Operation

Just as introduced in Section 3.2.1, the teacher (the best individual of the population) tries to disseminate knowledge among the learners (the other individuals), which in turn increases the knowledge level of the whole class (population). So, after the IDE phase of the hth generation, let the individual with the best fitness value, $X_{teacher}^h$, be the teacher, and let $M^h = (M_1^h, M_2^h, \dots, M_n^h)$ be the mean individual, where i and n have the same meaning as in Section 4.2 and $M_i^h$ is the element-wise mean of block i over the population. The difference of block i between the result of the teacher and the mean result of the learners for learner d is calculated as follows:

$$Difference_{d,i}^h = r_d\left(X_{teacher,i}^h - T_F\, M_i^h\right) \tag{24}$$

where $r_d$ is a random number in the range $[0,1]$ and $T_F$ is the teaching factor, which can be calculated by (11). Then block i of individual d is updated as $X_{d,i}^{new,h} = X_{d,i}^h + Difference_{d,i}^h$, and the updated individual is accepted if it gives a better fitness value.

4.4.2. Improvement of TF

In the canonical TLBO algorithm, the teaching factor $T_{F,d}$ is either 1 or 2, meaning that a learner learns either nothing or everything from the teacher. Obviously, this is not realistic: in an actual teaching-learning process, a learner may learn in any proportion from the teacher, depending on the learner's learning ability, the teacher's teaching ability, or other factors, so the teaching factor should not always sit at its extreme values but vary in between [27, 30]. Therefore, $T_{F,d}$ is calculated as follows:

$$T_{F,d} = \frac{f\left(X_d^h\right)}{f\left(X_{teacher}^h\right)} \tag{25}$$

where $f(X_d^h)$ and $f(X_{teacher}^h)$ are the fitness values of $X_d^h$ and $X_{teacher}^h$, respectively.
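Putting (24) and (25) together, one block update of the IT phase can be sketched in Python as follows; the block-sum repair reuses the proportional fine-tuning of Section 4.3.2, which is our assumption, as is the small guard against division by zero.

import numpy as np

def it_phase_block(X_d_block, teacher_block, mean_block, f_d, f_teacher, K_i):
    # Eq. (25): improved teaching factor (minimization: f_teacher <= f_d, so TF >= 1)
    TF = f_d / max(f_teacher, 1e-12)
    r = np.random.rand()
    new_block = X_d_block + r * (teacher_block - TF * mean_block)   # Eq. (24)
    # repair so the block still sums to K_i (assumption: same proportional
    # fine-tuning as in the block crossover of Section 4.3.2)
    new_block = np.clip(new_block, 0, None)
    new_block = np.floor(new_block * K_i / max(new_block.sum(), 1)).astype(int)
    new_block[np.argmax(new_block)] += K_i - new_block.sum()
    return new_block   # the caller accepts it only if the full individual improves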

5. Experiments and Discussions

In this section, the effectiveness of the proposed IHDETBO algorithm is examined with several benchmark functions, and its application to MBSPHE-CSCCM is demonstrated by a case study. The experiments are run on a PC with an Intel® Core™ i5-3337U CPU at 1.80 GHz and 8.00 GB of RAM, running Windows 7 (64-bit). The experiments with benchmark functions are programmed in Matlab R2016a, and the case study in Microsoft Visual C++ 6.0.

5.1. Experiments with Benchmark Functions

To investigate the performance of the proposed IHDETBO algorithm, six benchmark functions with different characteristics, dimensions, and search spaces are adopted. The results obtained by IHDETBO are compared with those of other optimization algorithms, namely PSO, DE, and TLBO, under different dimensions and population sizes.

5.1.1. Benchmark Functions

To analyze and compare the performance and accuracy of the IHDETBO algorithm, we adopt the six benchmark functions shown in Table 1. These functions have different characteristics, namely unimodality (U), multimodality (M), separability (S), and non-separability (N), listed in column C of Table 1, and the global minimum of each function is 0. For a unimodal function, the local minimum is also the global minimum; conversely, a multimodal function has several local minima, so finding the global minimum of a multimodal function is more difficult and requires better global search ability. Additionally, in non-separable functions the variables interact with one another, which is not the case for separable functions, so finding the optimum of a non-separable function is more difficult than for a separable one. The abilities of exploration, exploitation, and locating the optimum can therefore be assessed with these functions, which can be seen as abstractions of practical engineering problems.

5.1.2. Parameters Settings

We test the proposed IHDETBO algorithm and compare the results with the PSO, DE, and TLBO algorithms under the same parameter settings. For the PSO algorithm, the velocity limit v is half of the search range. For the DE algorithm, the scale factor F and crossover probability CR are 0.5 and 0.9, respectively [31]. The maximum number of iterations is 500, shown in column M of Table 2. The population size, an important parameter for heuristic algorithms, is set to 10, 20, and 50, shown in column N of Table 2. In addition, the dimensionality of the search space, another important factor, is set to 2, 5, and 10 for each population size, shown in column D of Table 2. All experiments are run 50 times independently.

5.1.3. Discussion on the Experimental Results

Every experiment is run 50 times independently. The results of the comparison tests on the six benchmark functions are shown in Table 2. The columns best, worst, mean, std, and FE give the best value, worst value, mean value, standard deviation, and mean number of function evaluations over the 50 runs, respectively. The mean value is the most important index for validating algorithm performance. Additionally, for better analysis, the objective function landscapes for D=2 are shown in Figure 4, from which the shapes and corresponding characteristics can be observed.

According to the landscapes shown in Figure 4, the characteristics of the functions can be analyzed theoretically. The first benchmark function f1 is a unimodal separable function. It is a very simple function, the easiest of the six in which to find the global minimum; it requires nearly no global search capability and mainly exercises the local convergence speed of an algorithm. As shown in Table 2, the performance rank is TLBO>IHDETBO>DE>PSO. The second function f2 is unimodal and non-separable. Although it is unimodal, its shape is spiral, so the search is prone to oscillation, which makes it difficult to identify the search direction. Due to this feature, the global minimum is hard to find, and this benchmark is often used to evaluate the global search capability of an algorithm. As shown in Table 2, the performance rank is IHDETBO>TLBO>DE>PSO. The third function f3 is multimodal and non-separable. Although it has many local minima, most of them lie in a long, narrow region around the global minimum, so they are not deceptive. It is therefore also very easy to find the global minimum, and the global search capability of an algorithm has little effect on the result. As shown in Table 2, the performance rank is TLBO>IHDETBO>PSO>DE. The fourth function f4 is multimodal, separable, and strongly deceptive. Around the global minimum there are many local minima whose gradients are very similar to that of the global minimum, so an optimization algorithm may mistake one of them for the global minimum. It therefore evaluates the population diversity and global search capability of an algorithm well. As shown in Table 2, the performance rank is IHDETBO>DE>TLBO>PSO, and IHDETBO performs much better. The fifth function f5 is a typical non-linear multimodal non-separable function with a wide search space. The variables in every dimension are closely related and interact with each other, and there are many local minima, so it is usually considered a complex multimodal problem that is difficult for optimization algorithms. As shown in Table 2, the performance rank is IHDETBO>TLBO>PSO>DE. Similar to f5, the sixth function f6 is also a typical non-linear multimodal non-separable function. In the D-dimensional search space there are about 10^D local minima, and the shapes of these irregular peaks jump up and down unevenly, so traditional gradient-based algorithms often perform poorly on it and the global minimum is difficult to find. As shown in Table 2, the performance rank is IHDETBO>TLBO>PSO>DE, and IHDETBO performs much better.

In summary, the classical DE algorithm has very strong global search capability, but its convergence speed is slow. As for the classical TLBO algorithm, every individual tries its best to approach the teacher in the teacher phase, and in the learner phase positive learning and reverse learning are carried out when an excellent partner and a poor partner are selected, respectively. Its local search capability is thus very strong and its convergence very fast, as observed on f1 and f3, but its performance is mediocre on complex deceptive benchmarks such as f2, f4, f5, and f6. The algorithm proposed in this paper is divided into two phases, IDE and IT. In the IDE phase, the mutation factor Fd and crossover probability CRd are both improved; in particular, generating the mutation factor Fd from a Cauchy distribution is very important for keeping the population diverse and improving the global search capability. The teacher phase of classical TLBO has very strong local search capability and hence a very high local convergence speed; the improvement in the IT phase not only enhances the local search capability but also avoids losing the possibility of finding better solutions through overreliance on the teacher individual, so the exploration and exploitation capacities are better balanced. According to the experimental results, the proposed algorithm performs better on high-dimensional non-linear multimodal benchmark functions, which are often considered mathematical models of complex engineering problems such as cloud service composition for CMfg.

5.2. Case Study

CMfg is a complex manufacturing system. Take car manufacturing as an example: the automobile industry is a large, complex manufacturing system involving more than 200 industry fields such as design, materials, and electronic equipment, and for every automaker nearly 70% of spare parts are outsourced. In this paper, we take tire manufacturing as a case study; it involves raw material production, tire production, hub production, wheel assembly, vehicle assembly, and auto dealing. We decompose the task into 5 subtasks: raw material production is subtask 1; tire production and hub production are subtasks 2.1 and 2.2, which can be executed in parallel and are regarded together as subtask 2; wheel assembly is subtask 3; vehicle assembly is subtask 4; and auto dealing is subtask 5.

5.2.1. Case Data

The case data come from [21]. Assume that a user needs 1000 cars and submits the requirement to the CMP; the CMP then searches several candidate CSs for each subtask, as shown in Table 3. The amounts of the subtasks are 4000, 4000, 4000, 1000, and 1000, respectively, and the time consumption per spare part (in hours) is shown in column t of Table 3.

5.2.2. Objective Function

The objective function is the goal of MBSPHE-CSCCM. Normally, it is significant to optimize the QoS according to the customer's preferences; for the sake of discussion, we take the production time as the optimization objective. So the objective function is defined as follows:

$$\min T, \quad T = \sum_{i=1}^{N} T_i \tag{26}$$

where N is the number of subtask batches and $T_i$ is the production time of the ith batch subtask. Aiming at minimizing the time consumption, the production results of each CS for subtask i are delivered to the CS whose production plan starts the earliest and whose related production time is the largest among the candidate CSs for the following subtask (i+1). So, we first establish the time matrix as follows:

$$TM = \left(TM_{i,j}\right), \quad TM_{i,j} = \left(t_{i,j}^s,\; K_{i,j},\; t_{i,j}^e\right) \tag{27}$$

where every element $TM_{i,j}$ contains three variables: the first is the start time $t_{i,j}^s$, initialized to 0; the second is the subtask amount $K_{i,j}$ of $CS_{i,j}$, obtained from an individual of IHDETBO; and the third is the end time $t_{i,j}^e$ of the subtasks for each CS, initialized by $t_{i,j}^e = t_{i,j}^s + K_{i,j}\, t_{i,j}$, where $i \in [1, N]$, $j \in [1, m_i]$, and $t_{i,j}$ is the corresponding time consumption per spare part from Table 3. Then the transportation scheme between subtasks is designed as Algorithm 1.

for n = 2 : N
 while row n still has unallocated demand
  Examine row (n-1): choose CS(n-1,a) with K(n-1,a) > 0 whose end time te(n-1,a) is the smallest;
  Examine row n: choose CS(n,b) with K(n,b) > 0 whose start time ts(n,b) is the smallest and whose unit time t(n,b) is the largest;
  if K(n-1,a) <= K(n,b)        % the whole remaining output of CS(n-1,a) is delivered
   if te(n-1,a) >= te(n,b)     % CS(n,b) must wait for the delivery
    te(n,b) = te(n-1,a) + K(n-1,a)*t(n,b);
    K(n,b) = K(n,b) - K(n-1,a);
    K(n-1,a) = 0;
   else                        % CS(n,b) is still busy when the delivery arrives
    te(n,b) = te(n,b) + K(n-1,a)*t(n,b);
    K(n,b) = K(n,b) - K(n-1,a);
    ts(n,b) = min(ts(n,b), te(n-1,a));
    K(n-1,a) = 0;
   end
  else                         % only part of the output of CS(n-1,a) is needed
   if te(n-1,a) >= te(n,b)
    te(n,b) = te(n-1,a) + K(n,b)*t(n,b);
    K(n-1,a) = K(n-1,a) - K(n,b);
    K(n,b) = 0;
   else
    te(n,b) = te(n,b) + K(n,b)*t(n,b);
    K(n-1,a) = K(n-1,a) - K(n,b);
    ts(n,b) = min(ts(n,b), te(n-1,a));
    K(n,b) = 0;
   end
  end
 end
end
Examine column N and return the largest te(N,.), which is the total production time T;
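For readers who prefer runnable code, the following Python sketch is one possible interpretation of Algorithm 1; the tie-breaking rules and the treatment of amounts as directly transferable between consecutive subtasks are simplifying assumptions, so it illustrates the greedy earliest-supplier/longest-consumer matching rather than reproducing the authors' exact implementation.

import numpy as np

def transport_makespan(K, t):
    # K[i][j]: amount allocated to CS(i,j) (from an IHDETBO individual)
    # t[i][j]: per-part production time of CS(i,j), as in Table 3
    N = len(K)
    rem_out = [np.array(row, dtype=float) for row in K]   # undelivered output per CS
    rem_in = [np.array(row, dtype=float) for row in K]    # unfilled demand per CS
    te = [np.zeros(len(row)) for row in K]                # end times per CS
    te[0] = np.array(K[0], dtype=float) * np.array(t[0])  # row 1 starts at time 0
    for n in range(1, N):
        while rem_in[n].sum() > 0 and rem_out[n - 1].sum() > 0:
            # supplier: unfinished CS in row n-1 with the smallest end time
            a = min((j for j in range(len(K[n - 1])) if rem_out[n - 1][j] > 0),
                    key=lambda j: te[n - 1][j])
            # consumer: CS in row n available earliest, longest unit time first
            b = min((j for j in range(len(K[n])) if rem_in[n][j] > 0),
                    key=lambda j: (te[n][j], -t[n][j]))
            q = min(rem_out[n - 1][a], rem_in[n][b])      # delivered amount
            # the consumer processes the batch once both it and the supplier are ready
            te[n][b] = max(te[n][b], te[n - 1][a]) + q * t[n][b]
            rem_out[n - 1][a] -= q
            rem_in[n][b] -= q
    return float(te[N - 1].max())                          # total production time T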
5.2.3. Experimental Results

In this experiment, the parameters are set as follows: the population size N is 50, the positive constant c is 0.1, and the maximum number of iterations is 1000. Based on the proposed IHDETBO algorithm, the case data, and the objective function, the experimental results are shown in Tables 4 and 5. Table 4 shows the production times of the 50 schemes corresponding to the individuals (integer arrays) in the last generation. Individual No. 20 is the best one, with a production time of 444.208 hours. The subtask amount of every CS in the best scheme, indicated by the integers in the corresponding genebits of individual No. 20, is shown in Table 5.

Figure 5 shows the batch division and transportation scheme in detail. Each circle indicates a candidate CS for a subtask, and the number inside indicates the corresponding subtask amount. The arrows indicate the delivery destinations for the next subtask, and the numbers on them indicate the delivery amounts. The circles marked in yellow indicate, for each subtask, the CS that completes its own task the latest, and the related solid line indicates the production line that takes the longest time. We can conclude that the cloud manufacturing task has been divided into several small batches executed in a parallel and hybrid manner. Obviously, thanks to MBSPHE-CSCCM, the production time is substantially reduced.

6. Conclusions and Future Work

With the intense competition in the global market and increasingly serious energy and environmental issues, the integration and sharing of manufacturing resources have become more and more important in the manufacturing industry. As one of the new manufacturing paradigms, CMfg has been proposed to address these problems and has gradually come into focus. In practice, CMfg is a large-scale networked distributed manufacturing mode. Its manufacturing resources, which are usually scattered all over the world, are characterized by massiveness, heterogeneity, complexity, and coarse granularity, and the transportation among them is very complex owing to today's advanced logistics. Generally speaking, CMfg is a typical complex system in a complex environment, and its manufacturing resources are encapsulated as CSs. Because the total amount of a task may be very large, the service composition problem also becomes very complex. In this paper, we begin with a discussion of the state of the art of CMfg and then introduce the manufacturing scheme MBSPHE-CSCCM, in which a mass task is transformed into multi-batch subtasks that are executed in a parallel-hybrid manner. To address the resulting service composition problem, a novel optimization method, IHDETBO, is proposed. The method has two phases: the IDE phase, based on the basic concepts and operations of DE, in which the factors F and CR are both improved and calculated with an adaptive strategy to enhance population diversity and generate better individuals; and the IT phase, in which the teacher phase of classical TLBO is adopted and the factor TF is improved to make the simulation more realistic. In addition, to adapt to the special conditions of CMfg, block operations including block encoding and initialization, block mutation, block crossover, block selection, and block teaching are proposed. Finally, simulation experiments and a case study demonstrate the advantages of the proposed method. MBSPHE-CSCCM plays a very important role in CMfg. Many other problems of CMfg also need to be studied, such as task decomposition, the evaluation of CS QoS, and CS selection based on performance matching, which deserve our further consideration.

Data Availability

The experimental data and case study data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was partly supported by the National Natural Science Foundation of China (Grant nos. 61701443, 61876168, and 61403342) and the Zhejiang Provincial Natural Science Foundation of China (LY18F030020).