#### Abstract

Based on the classical MapReduce concept, we propose an extended MapReduce scheduling model. In the extended MapReduce scheduling problem, each job contains an *open-map task* (a *map task* that can be divided into multiple operations, none of which may run in parallel) and *series-reduce tasks* (each *reduce task* consists of only one operation). Different from the classical MapReduce scheduling problem, we assume that none of the operations can be processed in parallel and that the machines are unrelated. To solve the extended MapReduce scheduling problem, we establish a mixed-integer programming model with makespan minimization as the objective. We then propose a genetic algorithm, a simulated annealing algorithm, and an *L*-*F* algorithm to solve the problem. Numerical experiments show that the *L*-*F* algorithm performs best on this problem.

#### 1. Introduction

To meet unpredictable demand, manufacturing companies often acquire additional manufacturing equipment. Scheduling jobs on this equipment increases the complexity of business operations [1]. Job scheduling mainly assists decision makers in deciding the processing sequence of jobs and the allocation of machines. Many scholars have studied classical scheduling problems, such as the single-machine and parallel-machine scheduling problems. Notably, the combination of MapReduce jobs and machine scheduling has attracted increasing attention in recent years. MapReduce is a popular framework for big data processing [2]. A MapReduce job generally consists of one map task and one reduce task, with the reduce task strictly following the map task. Normally, the map task can be split arbitrarily into multiple map operations, all of which can be processed in parallel on multiple parallel machines, while the reduce task is unsplittable and can therefore be processed by only one machine. In the classical MapReduce scheduling problem, each job consists of map tasks and reduce tasks, and the reduce task strictly follows the map task [3]. Normally, the reduce task contains only one operation.

Many existing studies on the MapReduce scheduling problem assume that the available machines are identical parallel machines, i.e., every machine processes a given operation at the same speed. In real production and scheduling environments, MapReduce-like production models are widely used. For example, the manufacturing and manual assembly of the parts and components of some valuable watches can be regarded as a MapReduce-style production mode. As another example, after the basic production of a high-grade cheongsam is finished, it often requires manual embroidery by multiple processors. During embroidery, the individual patterns need not be processed in any fixed sequence. After the embroidery is finished, the garment is ironed by one of the processors and then packed. From embroidery to packaging, this cheongsam can be regarded as an extended MapReduce-like job. MapReduce scheduling therefore has strong practical significance.

In practice, however, there is a class of jobs similar to MapReduce jobs whose features are most easily described in MapReduce terminology. We therefore refer to such jobs as *extended MapReduce jobs*. Specifically, each extended MapReduce job consists of one *open-map task* and several *series-reduce tasks*. Unlike a classical MapReduce job, an extended MapReduce job cannot be processed in parallel. The operations of the open-map task can be processed by any machine without a prescribed processing sequence (like the operations of an open shop job), but they cannot be processed in parallel; because of this similarity to open shop scheduling, we call it an *open-map task*. Each open-map task of a job is divided into multiple map operations, and each extended MapReduce job also contains several *series-reduce tasks*. In this paper, we study this extended MapReduce scheduling problem.

We consider an extended MapReduce scheduling problem with an open-map task and series-reduce tasks, motivated by practical applications with fixed partitions. There are three main differences between our problem and the classical MapReduce scheduling problem:

1. We assume that the parallel machines are unrelated, whereas the machines are identical in the classical MapReduce model.
2. In the extended MapReduce scheduling problem, map operations (i.e., a partition of the open-map task) cannot be processed in parallel, and the partition of each open-map task into map operations is known a priori.
3. We extend the classical MapReduce scheduling problem by assuming that there are several series-reduce tasks during processing.

In the extended MapReduce scheduling problem, each job consists of two kinds of tasks: one open-map task and several series-reduce tasks. There is no strict processing sequence for the map operations (like the operations of one job in an open shop), but there is a strict processing sequence for the reduce operations (each operation corresponds to one series-reduce task). Moreover, the extended MapReduce scheduling problem is clearly NP-hard, because it generalizes the parallel-machine makespan minimization problem.

To address the extended MapReduce scheduling problem, we establish a mixed-integer programming model with makespan minimization as the objective and solve it with CPLEX, a genetic algorithm, and a simulated annealing algorithm, respectively. Besides, we design a heuristic algorithm, which we call the *L*-*F* algorithm, to improve solution efficiency. We compare the *L*-*F* algorithm with the genetic algorithm and the simulated annealing algorithm to verify its performance.

Studies of the MapReduce scheduling problem often assume that all jobs are processed on identical parallel machines [4, 5]. First, among online studies based on this assumption, [6] considers a two-stage classical flexible flow shop problem based on the MapReduce system and gives an online 6-approximation algorithm and an online (1 + *ϵ*)-speed *O*(1/*ϵ*^{4})-competitive algorithm. Assuming that the reduce task is unknown before the map task is completed, [7] considers preemptive and nonpreemptive processing scenarios and proves that, in either scenario, the competitive ratio of any algorithm is at least 2 − 1/*n*. To improve system utilization for batch jobs, [8] proposes a modified MapReduce architecture that allows data to be pipelined between operators. Jiang et al. [9] propose an online algorithm for their MapReduce problem, in which the map operations are fractional and the reduce operations are preemptive. The paper [10] studies an online MapReduce scheduling problem on two uniform machines and discusses the competitive ratios with and without preemption.

Second, in the online scenario, [11] solves an online job scheduling problem with flexible deadline bounds via a receding horizon control algorithm. The study in [12] extends MapReduce tasks by allowing users to specify a job's deadline and tries to finish the job before that deadline. Li et al. [13] study a MapReduce periodical batch scheduling problem with makespan minimization and propose three heuristic algorithms to solve it. Fotakis et al. [14] propose an approximation algorithm for nonpreemptive MapReduce scheduling and verify its performance through numerical experiments. Most studies in the field of MapReduce scheduling have focused only on improving algorithms for the classical MapReduce scheduling problem; few extend the classical problem according to its practical applications.

In this work, we consider an extended MapReduce scheduling problem with one open-map task and several series-reduce tasks. We assume that the open-map task of each job is splittable (with a fixed partition pattern) and that all series-reduce tasks are unsplittable. Specifically, the operations of the open-map task of a job are given beforehand; these operations must be performed sequentially (they cannot overlap in time), but there is no precedence relation among them. We establish a mixed-integer programming model with makespan minimization as the objective and adopt CPLEX to solve it. We design a heuristic algorithm called the *L*-*F* algorithm for this problem and compare it with baseline heuristic algorithms. Numerical experiments show that the *L*-*F* algorithm has advantages.

The main contributions of this work are threefold:

1. Different from the classical MapReduce scheduling problem, we consider series-reduce tasks (in this work, mainly two series-reduce tasks), and the map operations cannot be processed in parallel.
2. We consider a processing scenario in which each job is processed on a set of unrelated parallel machines with different processing speeds.
3. We establish a mixed-integer programming model and adopt CPLEX and two classical heuristic algorithms to solve the problem. Besides, we design an *L*-*F* algorithm to solve the problem more efficiently.

The remainder of this paper is organized as follows. Section 2 models the problem mathematically and solves it via CPLEX. To solve the problem at larger scales, Section 3 designs heuristic algorithms (a genetic algorithm, a simulated annealing algorithm, and an *L*-*F* algorithm). Section 4 compares the designed *L*-*F* algorithm with the genetic algorithm and the simulated annealing algorithm. Section 5 concludes the paper and gives future research directions.

#### 2. Problem Statement and Mathematical Model

There are *n* jobs to be processed on *m* unrelated machines. Each job contains *k* operations, forming one open-map task and several series-reduce tasks. Operations 1 to (*k* − *r*) are the map operations of the open-map task of each job, and operations (*k* − *r* + 1) to *k* are the reduce operations of its series-reduce tasks. The specific characteristics of a job with two series-reduce tasks are shown in Figure 1. The processing time of operation *l* (1 ≤ *l* ≤ *k*) of job *j* on machine *i* is denoted by *p*_{ijl}. In addition, the processing of any two operations (map or reduce) of a job cannot overlap; that is, no job can be processed on more than one machine at the same time. The other processing rules are described below. We aim to produce a processing schedule that minimizes the makespan.
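As an illustration, an instance of this problem is fully described by the dimensions (*n*, *m*, *k*, *r*) and the processing-time array. The following Python helper (the function name and dictionary layout are our own, not from the paper) generates a random instance with unrelated machines, i.e., every machine draws its own time for each operation:

```python
import random

def make_instance(n_jobs, n_machines, k_ops, r_reduce, t_min=1, t_max=10, seed=0):
    """Generate a random instance of the extended MapReduce problem.

    proc[i][j][l] is the processing time of operation l of job j on
    machine i; machines are unrelated, so times are drawn independently
    per machine. The last r_reduce operations of each job are the
    series-reduce operations and must run in fixed order after all map
    operations of that job are finished.
    """
    rng = random.Random(seed)
    proc = [[[rng.randint(t_min, t_max) for _ in range(k_ops)]
             for _ in range(n_jobs)]
            for _ in range(n_machines)]
    return {"n": n_jobs, "m": n_machines, "k": k_ops, "r": r_reduce,
            "proc": proc}
```

The range [1, 10] mirrors the data generation used in the numerical experiments of Section 4.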

The problem under consideration assumes that machine setup times and job transportation times between operations are negligible. In this work, we propose scheduling methods to minimize the makespan for this problem. Without loss of generality, a reasonable schedule may contain idle time between any two consecutive operations on a machine. Below, we propose a mixed-integer linear programming model for this problem. For convenience, we first explain the basic symbols and decision variables and then build the model.

##### 2.1. Basic Notations

###### 2.1.1. Indices

- *i*: index of machines
- *j*: index of jobs
- *l*: index of the operations of a job
- *h*: index of positions on a machine
- *g*: index of the reduce operations of a job
- *s*: index of the processing orders of the operations of a job

###### 2.1.2. Input Parameters

- *I*: set of machines, *I* = {1, 2, ..., *m*}
- *J*: set of jobs, *J* = {1, 2, ..., *n*}
- *r*: the number of reduce operations of each job
- *O*: set of operations, *O* = {1, 2, ..., *k* − *r*, *k* − *r* + 1, ..., *k*}
- *H*: set of positions on a machine, *H* = {1, 2, ..., }
- *M*: a sufficiently large positive integer
- *p*_{ijl}: the processing time of operation *l* of job *j* when processed by machine *i*

###### 2.1.3. Decision Variables

- *x*_{ijlh}: a binary variable equal to 1 if operation *l* of job *j* begins processing on machine *i* at position *h*, and 0 otherwise
- *y*_{ijl}: a binary variable equal to 1 if operation *l* of job *j* is processed by machine *i*, and 0 otherwise
- *z*_{jls}: a binary variable equal to 1 if operation *l* of job *j* is processed in order *s*, and 0 otherwise. Note that if *l* ∈ {*k* − *r* + 1, ..., *k*}, *s* must equal *l*, because these operations correspond to series-reduce tasks
- *C*_{jl}: the completion time of operation *l* of job *j*
- *C*_{max}: the makespan, i.e., the maximum completion time over all operations of all jobs
- *S*_{jl}: the start time of operation *l* of job *j*
- *P*_{jl}: the processing time of operation *l* of job *j*

##### 2.2. Mathematical Model

The objective is to minimize the makespan, as shown in formula (1), subject to constraints (2)–(17).

Constraints (2)–(4) guarantee that each operation is processed by at most one machine at a time and that every operation is completed. Constraints (5)–(7) guarantee that no operations are processed in parallel. Constraint (8) calculates the actual processing time of operation *l* of job *j*. Constraints (9)–(11) calculate the start time of each operation and describe the relationship between the start times of two operations on the same machine. Constraints (12)–(13) calculate the completion time of each operation and the makespan. Constraints (14)–(15) calculate the start times of the reduce operations. Constraints (16)–(17) limit the ranges of the variables.

#### 3. Heuristic Algorithms

As the problem is NP-hard, we propose a genetic algorithm and a simulated annealing algorithm to solve it. Since jobs with two reduce tasks are common in practice, we solve the extended MapReduce scheduling problem with two reduce tasks via CPLEX and the heuristic algorithms. In addition, a new heuristic algorithm, the *L*-*F* algorithm, is proposed to solve this problem more efficiently.

##### 3.1. Genetic Algorithm

Genetic algorithms are computational models that simulate natural selection and the genetic mechanisms of Darwinian evolution; they search for optimal solutions by simulating the process of natural evolution. Genetic algorithms have been widely used in scheduling problems [15, 16] and other NP-complete problems [17]. In a genetic algorithm, each chromosome represents a feasible solution of the studied problem. In this section, we generate a certain number of initial feasible solutions, i.e., the initial population, and then, by cyclically selecting parents and producing offspring through crossover and mutation of the parents' chromosomes, continually search for the optimal solution.

###### 3.1.1. Chromosome Representation

In this study, each chromosome is divided into two equal parts. The elements of the first part represent the operations of all jobs and are called operation genes; the elements of the second part represent machines and are called machine genes. For example, in Figure 2, the first element of the chromosome is 21, representing operation 1 of job 2, and the tenth element (the first machine gene) is 1, representing machine 1. Operation genes correspond to machine genes one to one: since the first operation gene is 21 and the first machine gene is 1, operation 1 of job 2 is processed by machine 1.
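Decoding such a chromosome into a makespan can be sketched as follows, assuming operation genes are stored as (job, operation) pairs and machine genes as machine indices (this layout is our illustration, not the paper's). Each gene is scheduled at the earliest time that respects both the job's previous operation and the machine's availability, which enforces the two no-overlap rules stated below:

```python
def decode(chromosome, machines, proc):
    """Decode a chromosome into a schedule and return its makespan.

    chromosome : list of (job, op) pairs in processing order (operation genes)
    machines   : list of machine indices, one per gene (machine genes)
    proc[i][j][l] : processing time of operation l of job j on machine i

    A job never runs on two machines at once, so each new operation of a
    job starts no earlier than that job's previous operation finishes;
    likewise a machine processes one operation at a time.
    """
    job_ready = {}     # earliest start time for the next operation of each job
    mach_ready = {}    # time at which each machine becomes free
    makespan = 0
    for (j, l), i in zip(chromosome, machines):
        start = max(job_ready.get(j, 0), mach_ready.get(i, 0))
        finish = start + proc[i][j][l]
        job_ready[j] = finish
        mach_ready[i] = finish
        makespan = max(makespan, finish)
    return makespan
```

The same decoding applies to the simulated annealing encoding of Section 3.2, which uses the identical two-part structure.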

In addition, on each chromosome, if two operations belong to the same job, the subsequent operation starts no earlier than the completion of the preceding operation. Similarly, if two operations are assigned to the same machine, the subsequent operation starts no earlier than the completion of the previous operation on that machine.

###### 3.1.2. Initialization of Chromosomes

We randomly generate chromosomes according to the number of jobs, the number of machines, and the processing rules. For example, in Figure 2, in the first part of the chromosome, operation 2 of job 2 (gene 22) and operation 1 of job 1 (gene 11) both correspond to machine 2, and operation 2 of job 2 precedes operation 1 of job 1. Therefore, the start time of operation 1 of job 1 must be after the completion time of operation 2 of job 2.

###### 3.1.3. Fitness Function

In the genetic algorithm, fitness determines the probability that a chromosome is selected as a parent. Let NIND be the number of individuals in the population, *j* the *j*th chromosome in the population, and obj(*j*) the objective value of chromosome *j*. Since this study minimizes the objective function, we define the fitness of chromosome *j* as 1/obj(*j*).
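A minimal sketch of this fitness rule, paired with roulette-wheel selection probabilities (the roulette-wheel choice is our assumption; the paper does not spell out the selection operator):

```python
def fitness(obj_values):
    """Fitness of each chromosome: the reciprocal of its makespan, so a
    smaller makespan yields a larger fitness (minimization problem)."""
    return [1.0 / obj for obj in obj_values]

def selection_probabilities(obj_values):
    """Roulette-wheel selection: each chromosome is picked as a parent
    with probability proportional to its fitness."""
    fit = fitness(obj_values)
    total = sum(fit)
    return [f / total for f in fit]
```

With makespans [10, 20], the first chromosome is twice as likely to be selected as the second.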

###### 3.1.4. Crossover and Mutation

In a genetic algorithm, crossover and mutation let the offspring not only retain high-quality genes inherited from the parent chromosomes but also make the chromosome population more diverse. However, crossover and mutation can easily generate infeasible solutions. For example, in this study, crossing or mutating the operation genes can easily cause some operations to be processed twice and others not at all, which is clearly not intended. Adopting a correction algorithm to repair infeasible offspring is very time-consuming; hence, it is preferable to design operators such that precedence constraints are not violated [18].

In this problem, any operation can be processed by any machine. Therefore, to avoid infeasible solutions, we only cross the machine genes. Since the operation genes are not crossed, their order remains feasible, and each offspring still represents a feasible solution. For chromosome mutation, we randomly select a chromosome from the population; when the generated random number is less than the mutation probability, we carry out mutation in the following six steps. Based on empirical data (we tested executing each step with probabilities from 0.1 to 1 and found 0.3 to work best), each step is executed with probability 0.3:

*Step 1*. Randomly select two machine genes on the chromosome and regenerate them.

*Step 2*. Randomly select two jobs and exchange all operation genes of the two jobs in the chromosome; the corresponding machine genes are then randomly regenerated.

*Step 3*. Randomly select a job and exchange the operation genes representing the map operations of that job.

*Step 4*. Randomly select two jobs, then randomly select one map operation from each job, under the condition that no infeasible solution is generated, and exchange the map operation genes of the two jobs.

*Step 5*. Randomly select two jobs and exchange the first reduce operation genes of the two jobs.

*Step 6*. Randomly select two jobs and exchange the second reduce operation genes of the two jobs.
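The machine-gene-only crossover can be sketched as follows. The one-point crossover on the machine half is our assumption (the paper states only that operation genes are not crossed); any crossover restricted to the machine half preserves feasibility:

```python
import random

def crossover_machine_genes(parent1, parent2, rng=random):
    """One-point crossover restricted to the machine half of the
    chromosome. Operation genes are copied unchanged from each parent,
    so every offspring keeps a feasible operation order.

    A chromosome is a pair (operation_genes, machine_genes).
    """
    ops1, mach1 = parent1
    ops2, mach2 = parent2
    point = rng.randrange(1, len(mach1))          # crossover point
    child1 = (ops1, mach1[:point] + mach2[point:])
    child2 = (ops2, mach2[:point] + mach1[point:])
    return child1, child2
```

Because any operation may run on any machine, recombining machine genes never violates the processing rules.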

###### 3.1.5. Termination Conditions

The genetic algorithm terminates when either of the following conditions is satisfied: (1) the number of iterations reaches the preset maximum; (2) the fitness of the best chromosome in the population has remained unchanged for 100 consecutive generations.

##### 3.2. Simulated Annealing Algorithm

Simulated annealing (SA) is a local search method [19] that has been used increasingly widely in scheduling problems in recent years [20]. The idea was first proposed by [21] and is derived from the principle of solid annealing [22]: a solid is heated to a sufficiently high temperature and then allowed to cool slowly. During heating, the internal particles become disordered as the temperature rises and the internal energy increases; during cooling, the particles gradually become ordered, reaching an equilibrium state at each temperature and finally reaching the ground state at room temperature, where the internal energy is minimized. In this section, we solve the above problem via a simulated annealing algorithm.
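The overall annealing loop, with the geometric cooling schedule whose parameters appear in the experiments of Section 4, can be sketched as below. Interpreting the cooling rate as a multiplicative factor on the temperature is our assumption:

```python
import math
import random

def simulated_annealing(initial, neighbour, objective,
                        t0=1000.0, t_min=0.001, cooling=0.1, iters=100,
                        rng=random):
    """Generic SA skeleton: run `iters` moves at each temperature, then
    multiply the temperature by `cooling` until it drops below `t_min`.
    `neighbour` generates a new feasible solution from the current one;
    `objective` returns the makespan to minimize."""
    current = initial
    best = initial
    t = t0
    while t >= t_min:
        for _ in range(iters):
            cand = neighbour(current)
            delta = objective(cand) - objective(current)
            # Metropolis rule: always accept improvements; accept a
            # worse candidate with probability exp(-delta / t).
            if delta <= 0 or rng.random() <= math.exp(-delta / t):
                current = cand
                if objective(current) < objective(best):
                    best = current
        t *= cooling
    return best
```

The defaults mirror the experimental settings (initial temperature 1000, minimum temperature 0.001, cooling rate 0.1, 100 iterations per temperature).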

###### 3.2.1. Initial Feasible Solution

We encode the initial feasible solution with 2*η* elements, where *η* is the total number of operations of all jobs. The first *η* elements represent the operations and are called operation elements; the last *η* elements represent the machines and are called machine elements. The decoding method is the same as that of the chromosome encoding in the genetic algorithm. It is worth noting, however, that the simulated annealing algorithm generates only one initial feasible solution; diversity is achieved by continuously generating new feasible solutions.

###### 3.2.2. New Feasible Solution

To avoid generating infeasible solutions, we transform the current feasible solution by the following steps. Based on empirical data (we tested executing each step with probabilities from 0.1 to 1 and found 0.3 to work best), each step is executed with probability 0.3:

*Step 1*. Randomly select two machine elements from the old solution and regenerate them randomly.

*Step 2*. Randomly select two jobs and exchange all their operation elements in the solution; the elements of the corresponding machine parts are then randomly regenerated.

*Step 3*. Randomly select a job and exchange the positions of the operation elements of its open-map task in the solution.

*Step 4*. Randomly select two jobs and exchange the elements representing the map operations of the two jobs.

*Step 5*. Randomly select two jobs and exchange the elements representing the first reduce operations of the two jobs.

*Step 6*. Randomly select two jobs and exchange the elements representing the second reduce operations of the two jobs.
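Step 1 can be sketched as follows (the remaining steps follow the same pattern on the operation elements); the function name and solution layout are illustrative:

```python
import random

def regenerate_two_machine_elements(op_elems, mach_elems, n_machines, rng=random):
    """Step 1 of the neighbourhood move: pick two machine elements at
    random and redraw them uniformly from the machine set. The operation
    elements are untouched, so the new solution stays feasible."""
    new_mach = list(mach_elems)
    for pos in rng.sample(range(len(new_mach)), 2):
        new_mach[pos] = rng.randrange(n_machines)
    return op_elems, new_mach
```

Because only machine assignments change, the operation order encoded in the first half of the solution remains valid.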

The solution obtained after the above transformations is called the new feasible solution.

###### 3.2.3. Selection

We use *R*_{1}, *R*_{2}, and *T*_{0} to denote the objective value of the current feasible solution, the objective value of the new feasible solution, and the current temperature, respectively. We decide whether to accept the new feasible solution according to the Metropolis criterion. That is, if the objective value of the new feasible solution is better than that of the current solution, we accept the new feasible solution. Otherwise, we further judge whether exp(−(*R*_{2} − *R*_{1})/*T*_{0}) is greater than or equal to a number randomly generated from 0 to 1; if so, we accept the new feasible solution, and otherwise we reject it.
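A minimal sketch of this acceptance rule, assuming the standard Metropolis probability exp(−(*R*_{2} − *R*_{1})/*T*_{0}):

```python
import math
import random

def metropolis_accept(r1, r2, t0, rng=random):
    """Metropolis criterion: accept the new solution outright if it is
    no worse (r2 <= r1); otherwise accept it with probability
    exp(-(r2 - r1) / t0), which shrinks as the temperature falls."""
    if r2 <= r1:
        return True
    return rng.random() <= math.exp(-(r2 - r1) / t0)
```

At high temperatures almost any move is accepted; as *T*_{0} approaches the minimum temperature, only improving moves survive.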

##### 3.3. *L*-*F* Algorithm

To solve the extended MapReduce scheduling problem, we propose an algorithm based on the Longest Processing Time first (LPT) rule and the First Come, First Served (FCFS) rule, which we call the *L*-*F* algorithm. We divide the processing of all extended MapReduce jobs into three stages: the first stage is dedicated to the open-map task, the second stage to the first series-reduce task, and the third stage to the second series-reduce task.

When an operation is processed by the machine that takes the longest time to process it among all machines, we say it is processed in the worst case. For example, if the processing time of an operation is 10 on machine 1 and 5 on machine 2, its worst-case processing time is 10. So that a job with a longer worst-case processing time can be processed on a faster machine, in the first stage all map operations of all jobs are sorted by worst-case processing time from longest to shortest. When each operation starts processing, we choose the machine that yields the minimum completion time for that operation. The operations of the second and third stages are ordered by the first come, first served principle: among all jobs, the job whose open-map task completes first is the first to start its first series-reduce task, and likewise the job whose first series-reduce task completes first is the first to start its second series-reduce task.

The specific steps of the *L*-*F* algorithm are as follows:

*Step 1*. Sort the map operations in nonincreasing order of worst-case processing time. Process each operation on the machine that can complete it at the earliest time.

*Step 2*. After all map operations of a job are completed, the first reduce operation of the job starts processing on the available machine that can complete it at the earliest time.

*Step 3*. After the first reduce operation of a job is completed, the second reduce operation starts processing on the available machine that can complete it at the earliest time.
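Under stated assumptions (ties broken by lowest machine index, and FCFS implemented by ordering jobs on the completion time of their previous stage), the three steps can be sketched as:

```python
def l_f_schedule(n, m, k, r, proc):
    """Sketch of the L-F heuristic. proc[i][j][l] is the time of
    operation l of job j on machine i; the last r operations of each
    job are its series-reduce operations. Returns the makespan."""
    mach_free = [0] * m     # time at which each machine becomes idle
    job_ready = [0] * n     # time at which each job may start its next operation

    def assign(j, l):
        # Run operation l of job j on the machine that completes it
        # earliest, respecting machine and job availability.
        best = min(range(m),
                   key=lambda i: max(mach_free[i], job_ready[j]) + proc[i][j][l])
        finish = max(mach_free[best], job_ready[j]) + proc[best][j][l]
        mach_free[best] = finish
        job_ready[j] = finish

    # Step 1: map operations, longest worst-case processing time first.
    map_ops = [(j, l) for j in range(n) for l in range(k - r)]
    map_ops.sort(key=lambda jl: max(proc[i][jl[0]][jl[1]] for i in range(m)),
                 reverse=True)
    for j, l in map_ops:
        assign(j, l)

    # Steps 2-3: reduce stages, first come first served within each stage.
    for l in range(k - r, k):
        for j in sorted(range(n), key=lambda jj: job_ready[jj]):
            assign(j, l)

    return max(job_ready)
```

Because operations of one job never overlap, a job's availability time and the chosen machine's idle time jointly determine each start time.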

#### 4. Numerical Experiments

In this section, to compare the performance of the *L*-*F* algorithm, the genetic algorithm, and the simulated annealing algorithm, we carry out numerical experiments at three different scales using MATLAB R2014b on a PC with an Intel Core i5. Since this paper focuses on the extended MapReduce scheduling problem with two reduce tasks, each job in the numerical experiments contains exactly two reduce tasks.

In the small-scale instances, we adopt CPLEX to solve the mathematical model; CPLEX is one of the important tools for solving mixed-integer linear programming problems [23]. In the genetic algorithm, based on parameter-tuning experiments, we set the initial population size to 100, the number of iterations to 200, the crossover rate to 0.3, and the mutation rate to 0.7. In the simulated annealing algorithm, we set the initial temperature to 1000, the minimum temperature to 0.001, the cooling rate to 0.1, and the number of iterations at each temperature to 100. In addition, let obj denote the solution value obtained by a heuristic algorithm and obj\* the optimal value obtained by CPLEX; then gap = (obj − obj\*)/obj\*.
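The gap measure can be sketched as a one-liner; reporting it in percent is our assumption about how the tables are scaled:

```python
def gap_percent(obj_heuristic, obj_best):
    """Relative gap of a heuristic's makespan above the reference value
    (the CPLEX optimum in the small-scale tests; the best heuristic
    value in the large-scale tests), in percent."""
    return (obj_heuristic - obj_best) / obj_best * 100.0
```

A gap of 0 means the heuristic matched the reference solution.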

##### 4.1. Small-Scale Instances

We first carry out numerical experiments on small-scale instances. In each instance, we randomly generate 10 sets of processing times for the operations of all jobs in the range 1 to 5. We then compute the averages of the objective values, gap values, and running times of CPLEX, the genetic algorithm, the simulated annealing algorithm, and the *L*-*F* algorithm; the results are shown in Table 1. Besides, we also randomly generate 10 sets of processing times in the range 1 to 10 for each instance and compute the same averages for CPLEX and the three heuristic algorithms; the results are shown in Table 2.

The small-scale instances show that when the range of processing times is enlarged from (1, 5) to (1, 10), the running time of CPLEX increases significantly, while the running times of the genetic algorithm, the simulated annealing algorithm, and the *L*-*F* algorithm do not change significantly. In terms of gap value, when the range of processing times is enlarged, the gaps of the genetic algorithm and the simulated annealing algorithm decrease slightly, whereas the gap of the *L*-*F* algorithm increases slightly.

Besides, Tables 1 and 2 report that the running times of all three heuristic algorithms are significantly shorter than that of CPLEX. The *L*-*F* algorithm is slightly better than the genetic algorithm and the simulated annealing algorithm in both solution quality and computation time. In addition, the genetic algorithm runs faster than the simulated annealing algorithm, and its gap value is slightly smaller. However, when the number of machines is 2, the genetic algorithm and the simulated annealing algorithm are better than the *L*-*F* algorithm in solution quality. In general, the gap values of the genetic algorithm and the simulated annealing algorithm grow with the number of machines and jobs, whereas the gap value of the *L*-*F* algorithm decreases as the numbers of machines and jobs increase.

##### 4.2. Large-Scale Instances

In this section, to compare the performance of the genetic algorithm, the simulated annealing algorithm, and the *L*-*F* algorithm, we carry out numerical experiments on large-scale instances. We again randomly generate 10 sets of processing times in the range 1 to 10 for each combination of machine and job scales. Because CPLEX takes too long to solve these instances, we compute the gap values by replacing the exact optimal value with the minimum objective value obtained by the three heuristic algorithms. In Table 3, we further expand the scale of jobs and machines to compare the three heuristic algorithms.

Table 3 shows that the *L*-*F* algorithm outperforms the genetic algorithm and the simulated annealing algorithm on large-scale instances. The gap values of the genetic algorithm and the simulated annealing algorithm grow with the number of operations. Moreover, when the numbers of jobs and operations are fixed, their gap values also grow with the number of machines.

Table 3 also reports that when the number of machines increases from 4 to 6, the gap values of the genetic algorithm and the simulated annealing algorithm increase significantly. However, when the number of operations per job and the number of machines are fixed, the gap value of the *L*-*F* algorithm decreases as the number of jobs increases, while the gap values of the genetic algorithm and the simulated annealing algorithm do not change significantly.

#### 5. Conclusion

In this paper, we study an extended MapReduce scheduling model with one open-map task and several series-reduce tasks. Different from the classical MapReduce scheduling problem, we assume that the map operations cannot be processed in parallel and that the available machines are unrelated. We propose a genetic algorithm and a simulated annealing algorithm and design the *L*-*F* algorithm to solve the problem. Numerical comparisons show that the *L*-*F* algorithm outperforms the genetic algorithm and the simulated annealing algorithm in large-scale instances.

We propose two future research directions for this problem. First, when different operations or jobs are processed on the same machine, there may be setup times; future research could take setup times into account. Second, preemptive processing could be considered, which may be more realistic in some actual production scenarios.

#### Data Availability

The parameter setting data are available, and the data used in the numerical experiments were all obtained from the solver and are included within the article.

#### Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China (Grant nos. 71771048, 71832001, 71531011, and 71571134) and the Fundamental Research Funds for the Central Universities (Grant no. 2232018H-07).