Computational Intelligence and Neuroscience


Research Article | Open Access

Volume 2017 |Article ID 6563498 | 26 pages | https://doi.org/10.1155/2017/6563498

Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem

Academic Editor: Reinoud Maex
Received: 26 Oct 2016
Revised: 06 Jan 2017
Accepted: 01 Mar 2017
Published: 04 Apr 2017

Abstract

The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization scheduling problem in which a set of nurses is assigned to daily shifts subject to both hard and soft constraints. Solving the NRP calls for a novel metaheuristic technique. This work proposes a metaheuristic called the Directed Bee Colony Optimization Algorithm, which uses the Modified Nelder-Mead Method to solve the NRP. The authors formulate a multiobjective mathematical programming model and propose a methodology for adapting a Multiobjective Directed Bee Colony Optimization (MODBCO) algorithm. MODBCO successfully solves the multiobjective problem of optimizing scheduling problems. It integrates a deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed on the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the distinctiveness of the algorithm on the assessment criteria.

1. Introduction

Metaheuristic techniques, especially the Bee Colony Optimization Algorithm, can be easily adapted to solve a large number of NP-hard combinatorial optimization problems when combined with other methods. Metaheuristic methods can be divided into local search methods and global search methods. Local search methods such as tabu search, simulated annealing, and the Nelder-Mead Method exploit the search space of the problem, while global search methods such as scatter search, genetic algorithms, and Bee Colony Optimization focus on exploration of the search space [1]. Exploitation is the process of intensifying the search; the search is repeatedly restarted, each time from a different initial solution. Exploration is the process of diversifying the search to avoid getting trapped in a local optimum. A hybrid method balances exploration and exploitation by embedding local search within global search, yielding a robust solution to the NRP. In a previous study, a genetic algorithm was chosen for the global search and simulated annealing for the local search to solve the NRP [2].

In swarm intelligence, organisms follow simple basic rules to structure their environment. The agents have no centralized control structure; the complex global behavior emerges from local interactions among the agents [3]. Natural behaviors that have inspired swarm intelligence include bird flocking, ant colonies, fish schooling, and animal herding. The resulting algorithms include the ant colony optimization algorithm, the genetic algorithm, and the particle swarm optimization algorithm [4-6]. The natural foraging behavior of honey bees has inspired the bee algorithms. Honey bees collect nectar from various sites around their hive, and the best nectar site is found through the group decision of the honey bees. The bees communicate by means of the waggle dance, informing hive mates about the location of rich food sources. Algorithms that follow the waggle-dance communication performed by scout bees about nectar sites include the bee system, Bee Colony Optimization [7], and the Artificial Bee Colony [8].

The Directed Bee Colony (DBC) Optimization Algorithm [9] is inspired by the group decision-making process that bees use to select a nectar site. The group decision process includes consensus and quorum methods. Consensus is the process of vote agreement, in which the voting pattern of the scouts is monitored; the best nest site is selected once the quorum (threshold) value is reached. Experimental results show that the algorithm is robust and accurate in generating unique solutions. The contribution of this research article is a hybrid Directed Bee Colony Optimization with the Nelder-Mead Method for effective local search. The authors adapt MODBCO to solve multiobjective problems by integrating the following processes: first, a deterministic local search method, the Modified Nelder-Mead Method, obtains a provisional optimal solution; then a multiagent particle system environment is used in the exploration and decision-making process for establishing a new colony and selecting a nectar site. Only a few honey bees are active in the decision-making process, so the energy conservation of the swarm is highly achievable.

The Nurse Rostering Problem (NRP) is a staff scheduling problem that assigns a set of nurses to work shifts so as to maximize hospital benefit while considering a set of hard and soft constraints such as allotment of duty hours and hospital regulations. Nurse rostering is a delicate task of finding combinatorial solutions that satisfy multiple constraints [10]. Satisfying the hard constraints is mandatory in any scheduling problem, while a violation of a soft constraint is allowable but penalized. Achieving a globally optimal solution is impossible in many cases [11]. Many algorithmic techniques, such as metaheuristic methods, graph-based heuristics, and mathematical programming models, have been proposed over the last decades to solve automated scheduling and timetabling problems [12, 13].

In this work, the effectiveness of the hybrid algorithm is compared with different optimization algorithms using performance metrics such as error rate, convergence rate, best value, and standard deviation. The well-known combinatorial scheduling problem, NRP, is chosen as the test bed to experiment and analyze the effectiveness of the proposed algorithm.

This paper is organized as follows: Section 2 presents a survey of existing algorithms for solving the NRP. Section 3 highlights the mathematical model and the formulation of the hard and soft constraints of the NRP. Section 4 explains the natural decision-making behavior of honey bees and the Modified Nelder-Mead Method. Section 5 describes the development of the metaheuristic approach and demonstrates the effectiveness of the MODBCO algorithm in solving the NRP. Section 6 presents the computational experiments and the analysis of results for the formulated problem. Finally, Section 7 summarizes the discussion, and Section 8 concludes with future directions of the research work.

2. Literature Review

Berrada et al. [19] tackled the nurse scheduling problem with multiple objectives by considering various ordered soft constraints. The soft constraints are ordered by priority level, and this ordering determines the quality of the solution. Burke et al. [20] proposed a multiobjective Pareto-based search technique using simulated annealing, with a weighted-sum evaluation function for preferences and a dominance-based evaluation function for the Pareto set. Many mathematical models have been proposed to reduce cost and increase task performance; performance greatly depends on the type of constraints used [21]. Dowsland [22] proposed a technique of chain moves using a multistate tabu search algorithm, which alternates between feasible and infeasible regions of the search space. However, this algorithm fails to generalize to problem instances with different search spaces.

Burke et al. [23] proposed a hybrid tabu search algorithm to solve the NRP in Belgian hospitals. In their constraints, the authors included the previous roster along with the hard and soft constraints, and to accommodate this they incorporated heuristic search strategies into the general tabu search algorithm. This model provides flexibility and more user control. A hyperheuristic algorithm with tabu search was proposed for the NRP by Burke et al. [24]; they developed rule-based reinforcement learning, which is domain specific but selects from only a few low-level heuristics to solve the NRP. The indirect genetic algorithm is problem dependent and uses encoding and decoding schemes with genetic operators to solve the NRP. Burke et al. [25] developed a memetic algorithm for the nurse scheduling problem and compared it with tabu search. Their experimental results show that the memetic algorithm outperforms both the genetic algorithm and the tabu search algorithm in solution quality.

Simulated annealing has also been proposed for the NRP. Hadwan and Ayob [26] introduced a shift-pattern approach with simulated annealing, proposing a greedy constructive heuristic to generate the required shift patterns to solve the NRP at UKMMC (Universiti Kebangsaan Malaysia Medical Centre). This methodology reduces the complexity of the search space by building two- or three-day shift patterns when generating a roster. Experimental results demonstrated the efficiency of the algorithm with respect to execution time, performance, fairness, and solution quality; the approach handles all hard and soft constraints and produces quality roster patterns. Sharif et al. [27] proposed a hybridized heuristic approach with a variable neighborhood descent search algorithm to solve the NRP at UKMMC. This heuristic hybridizes a cyclic schedule with a noncyclic schedule. They applied a repairing mechanism that swaps shifts between nurses to tackle random shift arrangements in the solution. A variable neighborhood descent search algorithm (VNDS) changes the neighborhood structure using local search to generate a quality duty roster: the first neighborhood structure rerosters nurses to different shifts, and the second applies the repairing mechanism.

Aickelin and Dowsland [28] proposed a technique based on shift patterns that considers shift patterns with penalties, preferences, and the number of successive working days. An indirect genetic algorithm generates various heuristic decoders for shift patterns to reconstruct the shift roster for each nurse; a qualified roster is generated by the decoders from the best permutations of nurses. To generate the best search space solutions for the permutation of nurses, the authors used an adaptive iterative method that adjusts the order in which nurses are scheduled one by one. Asta et al. [29] and Anwar et al. [30] proposed a tensor-based hyperheuristic to solve the NRP. The authors tuned a specific group of datasets and embedded a tensor-based machine learning algorithm; a tensor-based hyperheuristic with memory management generates the best solution. This approach is intended for life-long applications that extract knowledge and desired behavior throughout the run time.

Todorovic and Petrovic [31] proposed a Bee Colony Optimization approach for the NRP in which all unscheduled shifts are allocated to the available nurses in a constructive phase. The algorithm combines constructive moves with local search to improve solution quality. In each forward pass, a predefined number of unscheduled shifts is allocated to the nurses, and solutions with little improvement in the objective function are discarded. Intelligent reduction of the neighborhood search improved the current solution. In the construction phase, unassigned shifts allotted to nurses can lead to constraint violations and higher penalties.

Several methods have been proposed on the INRC2010 dataset to solve the NRP; the authors considered the five latest competitors to measure the effectiveness of the proposed algorithm. Asaju et al. [14] proposed an Artificial Bee Colony (ABC) algorithm for the NRP. The process runs in two phases: first, heuristic-based ordering of shift patterns generates a feasible solution; second, the ABC algorithm obtains the solution. In this method premature convergence occurs, and the solution gets trapped in local optima; the lack of a local search step yields higher penalties. Awadallah et al. [15] developed a hybrid artificial bee colony (HABC) metaheuristic to solve the NRP, replacing the employed bee phase of the ABC algorithm with a hill climbing approach to strengthen exploitation. The use of hill climbing in ABC, however, leads to high computational time.

A global best harmony search with pitch adjustment design was used to tackle the NRP in [16]. The authors adapted the harmony search algorithm (HSA) for the exploitation process and particle swarm optimization (PSO) for the exploration process. In HSA, solutions are generated using three operators, namely, memory consideration, random consideration, and pitch adjustment, during the improvisation process. Two improvements were made to solve the NRP: multipitch adjustment to improve exploitation, and replacement of random selection with the global best to increase convergence speed. A hybrid harmony search algorithm with hill climbing was used to solve the NRP in [17]; metaheuristic harmony search and a hill climbing approach serve as the local search, and the memory consideration parameter in harmony search is replaced by the PSO algorithm. The derivative criterion reduces the number of iterations towards local minima. This process requires many parameters to construct the roster, since the improvisation process is carried out at each iteration.

Santos et al. [18] used integer programming (IP) to solve the NRP, proposing a monolithic compact IP model with polynomially many constraints and variables. The authors used both upper and lower bounds to obtain the optimal cost; they estimated and improved the lower-bound values towards the optimum, but this method requires additional processing time.

3. Mathematical Model

The NRP is a real-world problem in hospitals: a predefined set of shifts (such as S1, day shift; S2, noon shift; S3, night shift; and S4, free shift) over a scheduled period must be assigned to a set of nurses of different preferences and skills in each ward. Figure 1 shows an illustrative example of a feasible nurse roster, which consists of four shift types, namely, day, noon, night, and free (holiday), allocated to five nurses over an 11-day scheduled period. Each column in the schedule represents a day, and each cell contains the shift type allocated to a nurse. Each nurse is allocated one shift per day, and the number of shifts is assigned based on hospital contracts. Instances of this problem vary in the number of shift types, nurses, nurse skills, contracts, and the scheduling period. In general, both hard and soft constraints are considered in generating and assessing solutions.
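As a concrete illustration of the roster encoding described above, a minimal Python sketch (the names and numeric shift codes here are our own illustrative choices, not taken from the paper) represents the roster as a nurses-by-days matrix of shift codes:

```python
# Roster as a nurses x days matrix whose entries are shift codes:
# 1 = day shift (S1), 2 = noon shift (S2), 3 = night shift (S3),
# 0 = free shift / holiday (S4). Sizes match the Figure 1 example.
NURSES, DAYS = 5, 11
SHIFT_NAMES = {0: "free", 1: "day", 2: "noon", 3: "night"}

def empty_roster(nurses=NURSES, days=DAYS):
    """Start from an all-free roster; one cell per nurse per day."""
    return [[0] * days for _ in range(nurses)]

roster = empty_roster()
roster[0][0] = 1  # nurse 0 works the day shift on day 0
```

Because each cell holds exactly one shift code per nurse per day, the encoding makes the one-shift-per-day rule (HC2 below) hold by construction.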

Hard constraints are regulations that must be satisfied to achieve a feasible solution; they cannot be violated, since they are demanded by hospital regulations. Hard constraints HC1 to HC5 must be fulfilled to schedule the roster. The soft constraints SC1 to SC14 are desirable, and the selection of soft constraints determines the quality of the roster. Tables 1 and 2 list the set of hard and soft constraints considered in solving the NRP. This section describes the mathematical model of the hard and soft constraints extensively.


Hard constraints

HC1: All demanded shifts are assigned to a nurse.
HC2: A nurse can work only a single shift per day.
HC3: The minimum number of nurses required for each shift is met.
HC4: The total number of working days for each nurse should be between the minimum and maximum range.
HC5: A day shift followed by a night shift is not allowed.


Soft constraints

SC1: The maximum number of shifts assigned to each nurse.
SC2: The minimum number of shifts assigned to each nurse.
SC3: The maximum number of consecutive working days assigned to each nurse.
SC4: The minimum number of consecutive working days assigned to each nurse.
SC5: The maximum number of consecutive days on which no shift is allotted to each nurse.
SC6: The minimum number of consecutive days on which no shift is allotted to each nurse.
SC7: The maximum number of consecutive working weekends with at least one shift assigned to each nurse.
SC8: The minimum number of consecutive working weekends with at least one shift assigned to each nurse.
SC9: The maximum number of weekends with at least one shift assigned to each nurse.
SC10: Specific working day.
SC11: Requested day off.
SC12: Specific shift on.
SC13: Specific shift off.
SC14: Nurse not working on an unwanted pattern.

The NRP is defined over a set of nurses $N$, a set of shifts $S$, and a set of days $D$ in the scheduling period, where each row of the roster is specific to a particular nurse. The solution roster is encoded as a binary matrix of dimension $|N| \times |S| \times |D|$, as in
$$x_{i,s,d} = \begin{cases} 1 & \text{if nurse } i \text{ is allocated shift } s \text{ on day } d, \\ 0 & \text{otherwise.} \end{cases}$$

HC1. In this constraint, all demanded shifts are assigned to a nurse:
$$\sum_{i \in N} x_{i,s,d} = R_{s,d} \quad \forall s \in S,\ d \in D,$$
where $R_{s,d}$ is the number of nurses required for day $d$ at shift $s$ and $x_{i,s,d}$ is the allocation of nurses in the feasible solution roster.

HC2. In this constraint, each nurse can work not more than one shift per day:
$$\sum_{s \in S} x_{i,s,d} \le 1 \quad \forall i \in N,\ d \in D,$$
where $x_{i,s,d}$ is the allocation of nurse $i$ in the solution at shift $s$ for day $d$.

HC3. This constraint deals with the minimum number of nurses required for each shift:
$$\sum_{i \in N} x_{i,s,d} \ge m_{s,d} \quad \forall s \in S,\ d \in D,$$
where $m_{s,d}$ is the minimum number of nurses required for shift $s$ on day $d$.

HC4. In this constraint, the total number of working days for each nurse should range between the minimum and maximum for the given scheduled period:
$$d_{\min} \le \sum_{d \in D} \sum_{s \in S} x_{i,s,d} \le d_{\max} \quad \forall i \in N.$$

The average working shift for nurse $i$ can be determined by using
$$w_i = \frac{1}{|D|} \sum_{d \in D} \sum_{s \in S} x_{i,s,d},$$
where $d_{\min}$ and $d_{\max}$ are the minimum and maximum number of working days in the scheduled period and $w_i$ is the average working shift of nurse $i$.

HC5. In this constraint, shift 1 followed by shift 3 is not allowed; that is, a day shift followed by a night shift is not allowed:
$$x_{i,1,d} + x_{i,3,d+1} \le 1 \quad \forall i \in N,\ d \in D.$$
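The five hard constraints can be checked mechanically on a matrix encoding of the roster. The following sketch is an illustrative feasibility check under our own encoding assumptions (shift codes 0 = free, 1 = day, 2 = noon, 3 = night; the function name and data shapes are not taken from the paper):

```python
def satisfies_hard_constraints(roster, demand, min_cover, work_range):
    """Check HC1-HC5 for a roster given as a nurses x days matrix of
    shift codes (0 = free, 1 = day, 2 = noon, 3 = night).

    demand[(s, d)]    -- nurses demanded for shift s on day d (HC1)
    min_cover[(s, d)] -- minimum coverage for shift s on day d (HC3)
    work_range        -- (min_days, max_days) per nurse (HC4)
    HC2 holds by construction: each cell holds exactly one shift code.
    """
    days = len(roster[0])
    lo, hi = work_range
    # HC1 / HC3: coverage per (shift, day) pair.
    for (s, d), req in demand.items():
        cover = sum(1 for row in roster if row[d] == s)
        if cover != req or cover < min_cover.get((s, d), 0):
            return False
    # HC4: total working days per nurse must lie within the range.
    for row in roster:
        worked = sum(1 for c in row if c != 0)
        if not lo <= worked <= hi:
            return False
    # HC5: a day shift (1) must not be followed by a night shift (3).
    for row in roster:
        for d in range(days - 1):
            if row[d] == 1 and row[d + 1] == 3:
                return False
    return True
```

A roster that passes this check is feasible in the sense of Table 1; the soft constraints below are then scored rather than enforced.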

SC1. The maximum number of shifts assigned to each nurse for the given scheduled period is as follows:
$$\sum_{d \in D} \sum_{s \in S} x_{i,s,d} \le \Phi_i^{\max} \quad \forall i \in N,$$
where $\Phi_i^{\max}$ is the maximum number of shifts assigned to nurse $i$.

SC2. The minimum number of shifts assigned to each nurse for the given scheduled period is as follows:
$$\sum_{d \in D} \sum_{s \in S} x_{i,s,d} \ge \Phi_i^{\min} \quad \forall i \in N,$$
where $\Phi_i^{\min}$ is the minimum number of shifts assigned to nurse $i$.

SC3. The maximum number of consecutive working days assigned to each nurse on which a shift is allotted for the scheduled period is as follows:
$$C_{i,j} \le \Theta_i^{\max} \quad \forall j = 1, \dots, k_i,$$
where $\Theta_i^{\max}$ is the maximum number of consecutive working days of nurse $i$, $k_i$ is the total number of consecutive working spans of nurse $i$ in the roster, and $C_{i,j}$ is the length of the $j$th working span of nurse $i$.

SC4. The minimum number of consecutive working days assigned to each nurse on which a shift is allotted for the scheduled period is as follows:
$$C_{i,j} \ge \Theta_i^{\min} \quad \forall j = 1, \dots, k_i,$$
where $\Theta_i^{\min}$ is the minimum number of consecutive working days of nurse $i$, $k_i$ is the total number of consecutive working spans of nurse $i$ in the roster, and $C_{i,j}$ is the length of the $j$th working span of nurse $i$.

SC5. The maximum number of consecutive days on which no shift is allotted to each nurse for the given scheduled period is as follows:
$$F_{i,j} \le \Psi_i^{\max} \quad \forall j = 1, \dots, l_i,$$
where $\Psi_i^{\max}$ is the maximum number of consecutive free days of nurse $i$, $l_i$ is the total number of consecutive free spans of nurse $i$ in the roster, and $F_{i,j}$ is the length of the $j$th free span of nurse $i$.

SC6. The minimum number of consecutive days on which no shift is allotted to each nurse for the given scheduled period is as follows:
$$F_{i,j} \ge \Psi_i^{\min} \quad \forall j = 1, \dots, l_i,$$
where $\Psi_i^{\min}$ is the minimum number of consecutive free days of nurse $i$, $l_i$ is the total number of consecutive free spans of nurse $i$ in the roster, and $F_{i,j}$ is the length of the $j$th free span of nurse $i$.

SC7. The maximum number of consecutive working weekends with at least one shift assigned to a nurse for the given scheduled period is as follows:
$$W_{i,j} \le \Omega_i^{\max} \quad \forall j = 1, \dots, q_i,$$
where $\Omega_i^{\max}$ is the maximum number of consecutive working weekends of nurse $i$, $q_i$ is the total number of consecutive working weekend spans of nurse $i$ in the roster, and $W_{i,j}$ is the length of the $j$th working weekend span of nurse $i$.

SC8. The minimum number of consecutive working weekends with at least one shift assigned to a nurse for the given scheduled period is as follows:
$$W_{i,j} \ge \Omega_i^{\min} \quad \forall j = 1, \dots, q_i,$$
where $\Omega_i^{\min}$ is the minimum number of consecutive working weekends of nurse $i$, $q_i$ is the total number of consecutive working weekend spans of nurse $i$ in the roster, and $W_{i,j}$ is the length of the $j$th working weekend span of nurse $i$.

SC9. The maximum number of weekends with at least one shift assigned to a nurse in four weeks is as follows:
$$\sum_{j=1}^{h_i} \min(1, g_{i,j}) \le G_i^{\max},$$
where $g_{i,j}$ is the number of working days at the $j$th weekend of nurse $i$, $G_i^{\max}$ is the maximum number of working weekends allowed for nurse $i$, and $h_i$ is the total count of weekends in the scheduling period of nurse $i$.

SC10. The nurse can request to work on a particular day in the given scheduled period:
$$\sum_{s \in S} x_{i,s,d} \ge a_{i,d} \quad \forall i \in N,\ d \in D,$$
where $a_{i,d} = 1$ encodes the day request from nurse $i$ to work on any shift on a particular day $d$.

SC11. The nurse can request not to work on a particular day in the given scheduled period:
$$\sum_{s \in S} x_{i,s,d} \le 1 - b_{i,d} \quad \forall i \in N,\ d \in D,$$
where $b_{i,d} = 1$ encodes the request from nurse $i$ not to work on any shift on a particular day $d$.

SC12. The nurse can request to work a particular shift on a particular day in the given scheduled period:
$$x_{i,s,d} \ge a_{i,s,d} \quad \forall i \in N,\ s \in S,\ d \in D,$$
where $a_{i,s,d} = 1$ encodes the shift request from nurse $i$ to work shift $s$ on day $d$.

SC13. The nurse can request not to work a particular shift on a particular day in the given scheduled period:
$$x_{i,s,d} \le 1 - b_{i,s,d} \quad \forall i \in N,\ s \in S,\ d \in D,$$
where $b_{i,s,d} = 1$ encodes the shift request from nurse $i$ not to work shift $s$ on day $d$.

SC14. The nurse should not work on an unwanted pattern suggested for the scheduled period:
$$\sum_{u \in U_i} P_{i,u} = 0,$$
where $P_{i,u}$ is the total count of occurrences of pattern $u$ for nurse $i$ and $U_i$ is the set of unwanted patterns suggested for nurse $i$.

The objective function of the NRP is to maximize nurse preferences and minimize the penalty cost from violations of the soft constraints, as in (22):
$$\min f = \sum_{c \in SC} w_c \cdot V_c. \tag{22}$$

Here SC refers to the set of soft constraints indexed in Table 2, $w_c$ refers to the penalty weight of violating soft constraint $c$, and $V_c$ refers to the total number of violations of soft constraint $c$ in the roster solution. It should be noted that the penalty function [32] is used in the NRP to improve performance and to provide a fair comparison with other optimization algorithms.
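A minimal sketch of objective (22) in Python, assuming violation counts and penalty weights are supplied per soft constraint; the span-based helper mirrors the min/max form of SC3-SC8. Both function names are illustrative, not the paper's implementation:

```python
def roster_penalty(violations, weights):
    """Objective (22): weighted sum of soft-constraint violations.

    violations[c] -- number of violations of soft constraint c in the roster
    weights[c]    -- penalty weight w_c for soft constraint c
    Hard constraints are not weighted; they must hold outright.
    """
    return sum(weights[c] * v for c, v in violations.items())

def span_penalty(span_lengths, lo, hi):
    """Violation count in the style of SC3-SC8: for each consecutive
    span, count the days above the maximum plus the days short of
    the minimum allowed span length."""
    return sum(max(0, length - hi) + max(0, lo - length)
               for length in span_lengths)
```

For example, spans of lengths 2, 5, and 1 against a [2, 4] range contribute one day over the maximum and one day under the minimum, for a violation count of 2.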

4. Bee Colony Optimization

4.1. Natural Behavior of Honey Bees

Swarm intelligence is an emerging discipline for the study of problems that require an optimal approach rather than a traditional one. It is a branch of artificial intelligence based on the study of the behavior of social insects, composed of many individual actions in a decentralized, self-organized system. Swarm behavior characterizes the natural behavior of many species, such as fish schools, herds of animals, and flocks of birds, formed out of the biological need to stay together. A swarm is an aggregation of animals such as birds, fish, ants, and bees based on collective behavior. The individual agents in the swarm exhibit stochastic behavior that depends on their local perception of the neighborhood. Communication among the insects is formed within the colonies and promotes collective intelligence among them.

The important features of swarms are proximity, quality, response variability, stability, and adaptability. The swarm must be capable of simple space and time computations (proximity) and must respond to quality factors. It should allow diverse activities and not be restricted to narrow channels. It should maintain stability and not fluctuate in behavior, while its adaptability must allow the behavior mode to change when required. Several hundreds of bees from a swarm work together to find nesting sites and select the best nest site. Bee Colony Optimization is inspired by this natural behavior of bees: the bee optimization algorithm follows the group decision-making processes of honey bees, in which a honey bee searches for the best nest site with both speed and accuracy.

In a bee colony there are three different types of bees: a single queen bee, thousands of male drone bees, and thousands of worker bees.
(1) The queen bee is responsible for creating new colonies by laying eggs.
(2) The male drone bees mate with the queen and are discarded from the colonies.
(3) The remaining female bees in the hive are called worker bees, the building blocks of the hive; their responsibilities are to feed, guard, and maintain the honey bee comb.

Based on their responsibilities, worker bees are classified as scout bees and forager bees. A scout bee flies in search of food sources randomly and returns when its energy is exhausted. After reaching the hive, scout bees share the information and start to explore rich food source locations with forager bees. The scout bee's information includes the direction, quality, quantity, and distance of the food source it found. Information about a food source is communicated to foragers through dance, of which there are two types: the round dance and the waggle dance. The round dance indicates the direction of the food source when the distance is small. The waggle dance indicates both the position and the direction of the food source; the distance is encoded in the speed of the dance, with greater speed indicating a smaller distance, and the quantity of food is encoded in the wriggling of the bee. This exchange of information among hive mates builds collective knowledge: forager bees silently observe the behavior of the scout bee to learn the directions and details of the food source.

The group decision process of honey bees serves to find the best food source and nest site, and it is based on the swarming process of the honey bee. Swarming is the process in which the queen bee and half of the worker bees leave their hive to establish a new colony, while the remaining worker bees and the daughter bee stay in the old hive to monitor the waggle dance. After leaving their parental hive, swarm bees form a cluster in search of the new nest site. The waggle dance is used to communicate with quiescent bees, which are inactive in the colony, providing precise information about the direction of the flower patch based on its quality and energy level. The number of follower bees increases with the quality of the food source, allowing the colony to gather food quickly and efficiently. Swarm bees reach the decision on the best nest site by two methods: consensus, in which group agreement is taken into account, and quorum, in which the decision is made once the bee vote reaches a threshold value.
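The quorum idea can be sketched as a toy voting loop: scouts vote for candidate sites in proportion to site quality, and the first site whose tally reaches the threshold is committed to. Everything here (the names, the roulette-style voting rule) is an illustrative assumption, not the paper's procedure:

```python
import random

def quorum_select(sites, votes_needed, scouts, rng=None):
    """Toy quorum decision: each scout casts one quality-biased vote;
    stop as soon as any site's tally reaches the quorum threshold.

    sites        -- list of (name, quality) pairs
    votes_needed -- quorum threshold
    scouts       -- maximum number of voting scouts
    """
    rng = rng or random.Random()
    total = sum(q for _, q in sites)
    tally = {name: 0 for name, _ in sites}
    for _ in range(scouts):
        r = rng.uniform(0, total)  # roulette wheel biased by quality
        for name, q in sites:
            r -= q
            if r <= 0:
                tally[name] += 1
                break
        if max(tally.values()) >= votes_needed:
            break  # quorum reached: commit without polling every scout
    return max(tally, key=tally.get)
```

The early exit is the point of the quorum rule: the colony commits once enough votes agree, rather than waiting for full consensus among all scouts.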

The Bee Colony Optimization (BCO) algorithm is a population-based algorithm. The bees in the population are artificial bees, and each bee finds its neighboring solution from the current path. The algorithm alternates forward and backward passes. In the forward pass, every bee in the hive explores the neighborhood of its current solution, performing constructive moves followed by local search. In the backward pass, bees share the objective values obtained in the forward pass; the bees with higher priority are used to discard all nonimproving moves. The bees then either continue to explore in the next forward pass or continue the same process within the neighborhood. The flowchart for BCO is shown in Figure 2. BCO is proficient at solving combinatorial optimization problems by creating colonies of a multiagent system; the pseudocode for BCO is described in Algorithm 1. The bee colony system provides standard, well-organized, and well-coordinated teamwork and multitasking performance [33].

Bee Colony Optimization
(1) Initialization: assign every bee an empty solution.
(2) Forward Pass:
    For every bee:
  (a) set k := 1;
  (b) evaluate all possible construction moves;
  (c) based on the evaluation, choose one move using the Roulette Wheel;
  (d) if (k ≤ NC) then k := k + 1 and go to step (b),
       where k is the counter for construction moves and NC is the number of construction moves during one forward
       pass.
(3) Return to Hive.
(4) Backward Pass starts.
(5) Compute the objective function for each bee and sort accordingly.
(6) For each bee, calculate the probability (by logical reasoning) of continuing with its computed solution and becoming a recruiter bee.
(7) For every follower, choose a new solution from the recruiters.
(8) If the stopping criterion is not met, go to step (2).
(9) Evaluate and find the best solution.
(10) Output the best solution.
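A compact Python sketch of the forward/backward structure of Algorithm 1, assuming a generic cost function `evaluate` and a fixed move set (both names are our own); the recruiting rule used here (the better half recruit, the rest copy a recruiter) is a deterministic simplification of the probabilistic step (6):

```python
import random

def bco(evaluate, moves, sol_len, n_bees=10, nc=2, iters=20, seed=1):
    """Minimal BCO loop: bees build solutions of length sol_len via
    roulette-wheel construction moves (forward pass), then share results
    and copy recruiters (backward pass). evaluate(sol) is minimized."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        sols = [[] for _ in range(n_bees)]
        while len(sols[0]) < sol_len:
            # Forward pass: up to nc construction moves per bee,
            # choosing each move by a cost-biased roulette wheel.
            for b in range(n_bees):
                for _ in range(min(nc, sol_len - len(sols[b]))):
                    weights = [1.0 / (1.0 + evaluate(sols[b] + [m]))
                               for m in moves]
                    sols[b].append(rng.choices(moves, weights=weights)[0])
            # Backward pass: sort by objective; the worse half abandon
            # their partial solutions and follow a recruiter.
            sols.sort(key=evaluate)
            half = max(1, n_bees // 2)
            for b in range(half, n_bees):
                sols[b] = list(sols[rng.randrange(half)])
        cost = evaluate(sols[0])  # sols is sorted, so index 0 is best
        if cost < best_cost:
            best, best_cost = list(sols[0]), cost
    return best, best_cost
```

On a toy problem (binary moves, cost = sum of chosen elements), the loop reliably drives the population toward the all-zero solution, illustrating how the backward pass concentrates bees on promising partial solutions.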
4.2. Modified Nelder-Mead Method

The Nelder-Mead Method is a simplex method for finding a local minimum of a function of several variables and is a local search algorithm for unconstrained optimization problems. The whole search area is divided into different fragments and filled with bee agents. To obtain the best solution, each fragment is searched by its bee agents through the Modified Nelder-Mead Method (MNMM), and each agent in a fragment passes information about the optimized point using MNMM. The best points are obtained by MNMM, and the best solution is chosen by the decision-making process of honey bees. The algorithm is simplex based: an $n$-dimensional simplex is initialized with $n + 1$ vertices, so in two dimensions it forms a triangle and in three dimensions a tetrahedron. To assign the best and worst points, the vertices are evaluated and ordered based on the objective function.

The best point or vertex corresponds to the minimum value of the objective function, and the worst point is the one with the maximum value of the computed objective function. To form a new simplex, new vertex function values are computed using four procedures, namely, reflection, expansion, contraction, and shrinkage. Figure 3 shows the operators of the simplex triangle in MNMM.

The simplex operations update each vertex closer to its optimal solution; the vertices are ordered based on fitness value. The best vertex is $x_b$, the second best vertex is $x_{sb}$, and the worst vertex is $x_w$, determined from the objective function. Let $x_b$, $x_{sb}$, and $x_w$ be the vertices of the triangle regarded as food source points, that is, local optimal points. The objective function values $f(x_b)$, $f(x_{sb})$, and $f(x_w)$ are calculated based on (23) towards the food source points.

The objective function used to construct the simplex for local search using MNMM is formulated as the ordering
$$f(x_b) \le f(x_{sb}) \le f(x_w). \tag{23}$$

Based on the objective function values, the food points (vertices) are sorted in ascending order together with their corresponding honey bee agents, giving the order f(x_b) ≤ f(x_s) ≤ f(x_w) with their honey bee positions and food points in the simplex triangle. Figure 4 describes the search for the best minimized cost value for the nurse based on objective function (22). The working principle of the Modified Nelder-Mead Method (MNMM) for searching food particles is explained in detail below.

(1) In the simplex triangle, the reflection coefficient α, expansion coefficient γ, contraction coefficient β, and shrinkage coefficient δ are initialized.

(2) The objective function for the simplex triangle vertices is calculated, and the vertices are ordered. The best vertex with the lowest objective value is x_b, the second best vertex is x_s, and the worst vertex is x_w; these vertices are ordered based on the objective function as f(x_b) ≤ f(x_s) ≤ f(x_w).

(3) The first two best vertices, x_b and x_s, are selected, and the construction proceeds by calculating the midpoint of the line segment joining these two best food positions. The objective function decreases as the honey bee agent associated with the worst vertex moves towards the best and second best vertices. The midpoint of the line joining the best and second best vertices is calculated using x_mid = (x_b + x_s)/2 (24).

(4) A reflection vertex x_r is generated by reflecting the worst point x_w through the midpoint. Its objective function value f(x_r) is calculated and compared with the worst vertex value f(x_w). If f(x_r) < f(x_w), proceed with step (5); the reflection vertex can be calculated using x_r = x_mid + α(x_mid − x_w) (25).

(5) The expansion process starts when the objective function value at the reflection vertex is less than that of the worst vertex, f(x_r) < f(x_w), and the line segment is further extended to x_e through x_r and x_mid. The vertex point x_e is calculated by x_e = x_mid + γ(x_r − x_mid) (26). If the objective function value at x_e is less than that of the reflection vertex, f(x_e) < f(x_r), then the expansion is accepted, and the honey bee agent has found a better food position than the reflection point.

(6) The contraction process is carried out when f(x_s) ≤ f(x_r) < f(x_w), replacing x_w with x_r. If f(x_r) ≥ f(x_w), then a direct contraction, without the replacement of x_w by x_r, is performed. The contraction vertex can be calculated using x_c = x_mid + β(x_w − x_mid) (27). If f(x_c) < f(x_w), the contraction is accepted and x_w is replaced with x_c; go to step (2), or else proceed to step (7).

(7) The shrinkage phase proceeds when the contraction process at step (6) fails, and is done by shrinking all the vertices of the simplex triangle except x_b using x_i = x_b + δ(x_i − x_b) (28). When the objective function values of the reflection and contraction phases are not less than that of the worst point, the vertices x_s and x_w must be shrunk towards x_b. Thus the vertices of smaller value form a new simplex triangle with the other two best vertices.

(8) The calculations are stopped when the termination condition is met.
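The simplex update above can be condensed into a short sketch. The following is our own illustrative Python implementation, not the authors' MATLAB code; note that, for a triangle, the centroid of the non-worst vertices used below coincides with the midpoint of the two best vertices described in step (3).

```python
# Illustrative Nelder-Mead-style simplex minimizer (a sketch, not the paper's code).
# alpha, gamma, beta, delta: reflection, expansion, contraction, shrinkage coefficients.
def nelder_mead(f, simplex, alpha=1.0, gamma=2.0, beta=0.5, delta=0.5,
                max_iter=200, tol=1e-8):
    """Minimize f over n-dimensional points; simplex is a list of n+1 points."""
    n = len(simplex) - 1
    simplex = [list(p) for p in simplex]
    for _ in range(max_iter):
        simplex.sort(key=f)                      # best first, worst last
        if abs(f(simplex[0]) - f(simplex[-1])) < tol:
            break
        # centroid (midpoint) of all vertices except the worst
        mid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        xw = simplex[-1]
        xr = [mid[i] + alpha * (mid[i] - xw[i]) for i in range(n)]      # reflection
        if f(xr) < f(simplex[0]):
            xe = [mid[i] + gamma * (xr[i] - mid[i]) for i in range(n)]  # expansion
            simplex[-1] = xe if f(xe) < f(xr) else xr
        elif f(xr) < f(simplex[-2]):
            simplex[-1] = xr                                            # accept reflection
        else:
            xc = [mid[i] + beta * (xw[i] - mid[i]) for i in range(n)]   # contraction
            if f(xc) < f(xw):
                simplex[-1] = xc
            else:
                xb = simplex[0]  # shrink every vertex towards the best
                simplex = [xb] + [[xb[i] + delta * (p[i] - xb[i]) for i in range(n)]
                                  for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

sphere = lambda x: sum(v * v for v in x)
best = nelder_mead(sphere, [[3.0, 2.0], [2.5, 3.5], [4.0, 3.0]])
```

On a strictly convex test function such as the sphere, the simplex collapses onto the minimizer.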

Algorithm 2 describes the pseudocode for the Modified Nelder-Mead Method in detail. It portrays the detailed process of MNMM to obtain the best solution for the NRP. The workflow of the proposed MNMM is explained in Figure 5.

Modified Nelder-Mead Method for directed honey bee food search
(1) Initialization:
    X = {x_1, x_2, …, x_{n+1}} denotes the list of vertices in the simplex.
    α, γ, β, and δ are the coefficients of reflection, expansion, contraction, and shrinkage.
    f is the objective function to be minimized.
(2) Ordering:
    Order the vertices in the simplex from the lowest objective function value f(x_1) to the highest value f(x_{n+1}), ordered as
    f(x_1) ≤ f(x_2) ≤ ⋯ ≤ f(x_{n+1}).
(3) Midpoint:
    Calculate the midpoint of the first two best vertices in the simplex, x_mid = (x_1 + x_2)/2.
(4) Reflection Process:
    Calculate the reflection point x_r by x_r = x_mid + α(x_mid − x_{n+1}).
    if f(x_1) ≤ f(x_r) < f(x_n) then
        x_{n+1} ← x_r and go to Step (2).
    end if
(5) Expansion Process:
    if f(x_r) < f(x_1) then
        Calculate the expansion point using x_e = x_mid + γ(x_r − x_mid)
    end if
    if f(x_e) < f(x_r) then
        x_{n+1} ← x_e and go to Step (2).
    else
        x_{n+1} ← x_r and go to Step (2).
    end if
(6) Contraction Process:
    if f(x_n) ≤ f(x_r) < f(x_{n+1}) then
        Compute the outside contraction by x_c = x_mid + β(x_r − x_mid).
    end if
    if f(x_r) ≥ f(x_{n+1}) then
        Compute the inside contraction by x_c = x_mid − β(x_mid − x_{n+1}).
    end if
    if f(x_r) ≥ f(x_n) then
        Contraction is done between x_mid and the better vertex among x_r and x_{n+1}.
    end if
    if f(x_c) ≤ f(x_r) then
        x_{n+1} ← x_c and go to Step (2).
    else go to Step (7).
    end if
    if f(x_c) < f(x_{n+1}) then
        x_{n+1} ← x_c and go to Step (2).
    else go to Step (7).
    end if
(7) Shrinkage Process:
    Shrink towards the best solution with new vertices by x_i = x_1 + δ(x_i − x_1), where i = 2, …, n + 1.
(8) Stopping Criteria:
    Order and relabel the new vertices of the simplex based on their objective function values and go to Step (2).

5. MODBCO

Bee Colony Optimization is a metaheuristic algorithm for solving various combinatorial optimization problems, inspired by the natural foraging behavior of bees. The algorithm consists of two steps, the forward pass and the backward pass. During the forward pass, bees explore the neighborhood of their current solutions and find all possible ways forward. In the backward pass, bees return to the hive and share the objective function values of their current solutions. The nectar amount is calculated using a probability function and the solution is advertised; the bee with the better solution is given higher priority. The remaining bees decide, based on the probability value, whether to keep exploring their own solution or to proceed with the advertised solution. Directed Bee Colony Optimization is a computational system in which several bees work together in unity and interact with each other to achieve goals based on a group decision process. The whole search area of the bees is divided into multiple fragments, and different bees are sent to different fragments. The best solution in each fragment is obtained using a local search algorithm, the Modified Nelder-Mead Method (MNMM). To obtain the best solution, the total ranges of the individual parameters are partitioned into individual volumes. Each volume determines the starting point of the exploration of a food particle by each bee. The bees use the developed MNMM algorithm to find the best solution by remembering the last two best food sites they obtained. After obtaining the current solution, the bee starts the backward pass, sharing the information obtained during the forward pass. The bees share information about the optimized points through the natural behavior of bees called the waggle dance. When all the information about the best food sources has been shared, the best among the optimized points is chosen using the honey bee decision-making processes of consensus and quorum [34, 35].
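The forward/backward-pass recruitment described above can be sketched schematically. The snippet below is a hypothetical Python illustration, not the authors' implementation; the loyalty rule is a simplified stand-in for the nectar-amount probability function.

```python
import random

# Schematic sketch of one backward pass: each bee holds a candidate roster cost;
# better (cheaper) solutions give higher "loyalty", i.e. a higher chance that the
# bee keeps its own solution instead of following the advertised best one.
def bee_colony_pass(costs, rng=random.Random(42)):
    """One backward pass: return the index of the solution each bee decides to follow."""
    best = min(costs)
    worst = max(costs)
    span = (worst - best) or 1.0
    choices = []
    for i, c in enumerate(costs):
        loyalty = (worst - c) / span           # better solutions -> higher loyalty
        if rng.random() < loyalty:
            choices.append(i)                  # bee keeps exploring its own solution
        else:
            choices.append(costs.index(best))  # bee follows the advertised best
    return choices

choices = bee_colony_pass([10.0, 20.0, 30.0])
```

With these costs, the bee holding the best solution (loyalty 1) always keeps it, while the bee holding the worst solution (loyalty 0) always defects to the advertised best.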

5.1. Multiagent System

All agents live in an environment which is well structured and organized. In a multiagent system, several agents work together and interact with each other to obtain a goal. According to Jiao and Shi [36] and Zhong et al. [37], all agents should possess the following qualities: agents should live and act in an environment; each agent should sense its local environment; each agent should be capable of interacting with other agents in its local environment; and agents attempt to perform their goal. All agents interact with each other and take decisions to achieve the desired goals. The multiagent system is a computational system and provides an opportunity to optimize and compute complex problems. In the multiagent system, all agents live and act in the same environment, which is well organized and structured. Each agent in the environment is fixed on a lattice point. The size and dimension of the lattice in the environment depend upon the variables used. The objective function can be calculated based on the parameters fixed.

(1) Consider n independent parameters used to calculate the objective function. The range of the p-th parameter is [S_p, E_p], where S_p is the initial value and E_p is the final value of the p-th parameter chosen.

(2) Thus the objective function can be formulated over n axes; each axis contains the total range of a single parameter with its own dimension.

(3) Each axis is divided into smaller parts, each part called a step. The p-th axis can be divided into m_p steps, each with length h_p, where the value of h_p depends upon the parameter, for p = 1 to n. The relationship between m_p and h_p can be given as m_p = (E_p − S_p)/h_p.

(4) Each axis is then divided into branches; together, the branches across the n axes form n-dimensional volumes. The total number of volumes can be formulated as V = m_1 × m_2 × ⋯ × m_n.

(5) The starting point of an agent in the environment, which is one point inside a volume, is chosen by calculating the midpoint of the volume. The midpoint of the lattice can be calculated along each axis as the average of the volume's lower and upper bounds on that axis.
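The partitioning just described can be sketched as follows; the helper and the names `ranges` and `steps` are our own illustration of cutting each parameter range into steps and starting each bee at a volume midpoint.

```python
from itertools import product

# Sketch of the environment partitioning: each parameter range (start, end) is
# cut into m steps of length h = (end - start) / m; the Cartesian product of the
# per-axis step midpoints gives one starting point per volume.
def volume_midpoints(ranges, steps):
    """ranges: list of (start, end); steps: list of step counts per axis."""
    axes = []
    for (s, e), m in zip(ranges, steps):
        h = (e - s) / m                        # step length on this axis
        axes.append([s + (k + 0.5) * h for k in range(m)])
    return list(product(*axes))                # one midpoint per volume

mids = volume_midpoints([(0.0, 10.0), (0.0, 4.0)], [5, 2])
# total number of volumes is the product of the step counts: 5 * 2 = 10
```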

5.2. Decision-Making Process

A key role of the honey bees is to select the best nest site, which is done through a decision-making process that produces a unified decision. They follow a distributed decision-making process to find the neighbor nest site for their food particles. The pseudocode for the proposed MODBCO algorithm is shown in Algorithm 3. Figure 6 explains the workflow of the proposed algorithm for the search of food particles by honey bees using MODBCO.

Multi-Objective Directed Bee Colony Optimization
(1) Initialization:
    f is the objective function to be minimized.
    Initialize the number of parameters n and the step lengths h_p, where p = 1 to n.
    Initialize the initial value and the final value of the p-th parameter as S_p and E_p.
    Solution Representation:
    The solutions are represented in the form of binary values, which can be generated as follows.
    For each solution i:
        x_i = (x_{i1}, x_{i2}, …, x_{in}), with x_{ij} ∈ {0, 1}
    End for
(2) The number of steps on each axis can be calculated using
    m_p = (E_p − S_p)/h_p
(3) The total number of volumes can be calculated using
    V = m_1 × m_2 × ⋯ × m_n
(4) The midpoint of the volume, used to calculate the starting point of the exploration, can be calculated along each axis as the average of the volume's lower and upper bounds on that axis.
(5) Explore the search volume according to the Modified Nelder-Mead Method using Algorithm 2.
(6) Record the value of the optimized point in the vector table B.
(7) The globally optimized point is chosen based on the bee decision-making process using the consensus and quorum
    method approach.
5.2.1. Waggle Dance

The scout bees, after returning from the search for food particles, report the quality of the food site through a communication mode called the waggle dance. Scout bees perform the waggle dance for the other quiescent bees to advertise their best nest site for the exploration of the food source. In the multiagent system, each agent, after collecting its individual solution, gives it to the centralized system. To select the best optimal solution in the minimization case, the mathematical formulation can be stated as B* = min_i B_i.

This mathematical formulation finds the minimal optimal case among the search solutions, where B_i is the search value calculated by agent i. The search values are recorded in the vector table B; B is the vector which consists of N elements. Each element contains the values of the parameters; both the optimal solution and the parameter values are recorded in the vector table.
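The vector-table bookkeeping above amounts to a minimum search over recorded (cost, parameters) pairs; the sketch below uses our own hypothetical names.

```python
# Minimal sketch of the "vector table" selection: each agent reports its locally
# optimized cost together with the parameter values that produced it, and the
# advertised solution is simply the row with the minimal cost.
def advertise(vector_table):
    """vector_table: list of (cost, params) rows; return the minimal-cost row."""
    return min(vector_table, key=lambda row: row[0])

table = [(56.0, [1, 0, 1]), (63.0, [0, 1, 1]), (58.0, [1, 1, 0])]
best_cost, best_params = advertise(table)
```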

5.2.2. Consensus

Consensus is widespread agreement within the group based on voting; the voting pattern of the scout bees is monitored periodically to know whether they have reached an agreement and started acting on the decision pattern. Honey bees use the consensus method to select the best search value; the globally optimized point is chosen by comparing the values in the vector table. The globally optimized point is selected using the mathematical formulation G = min(B_1, B_2, …, B_N).

5.2.3. Quorum

In the quorum method, the optimum solution is taken as the final solution based on a threshold level obtained through the group decision-making process. When the support for a solution reaches the quorum threshold q, the solution is considered final based on the unison decision process. The quorum threshold value describes the quality of the food particle result. When the threshold value is low, the computation time decreases, but it leads to inaccurate experimental results. The threshold value should therefore be chosen to attain low computational time while keeping the experimental results accurate.
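The quorum rule can be illustrated with a short sketch; the threshold name q and the vote representation are our own assumptions.

```python
# Sketch of the quorum rule: the group commits to a candidate once the fraction
# of scouts voting for it reaches the quorum threshold q.
def quorum_decision(votes, q=0.6):
    """votes: list of candidate ids; return the committed candidate or None."""
    for candidate in set(votes):
        if votes.count(candidate) / len(votes) >= q:
            return candidate
    return None

committed = quorum_decision(["siteA", "siteA", "siteB", "siteA", "siteA"], q=0.6)
```

Here "siteA" holds 4 of 5 votes (0.8 ≥ q), so the group commits to it; a lower q would commit earlier at the risk of a poorer choice, matching the accuracy/time trade-off described above.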

6. Experimental Design and Analysis

6.1. Performance Metrics

The performance of the proposed algorithm MODBCO is assessed by comparing it with five different competitor methods. Six performance metrics are considered to investigate the significance of the algorithm and evaluate the experimental results. The metrics are listed in this section.

6.1.1. Least Error Rate

Least Error Rate (LER) is the percentage difference between the known optimal value and the best value obtained. The LER can be calculated using
LER = ((Best value obtained − Known optimal value)/Known optimal value) × 100.

6.1.2. Average Convergence

The Average Convergence is a measure to evaluate the quality of the generated population on average. The Average Convergence (AC) is the average of the convergence rates of the solutions, expressed as a percentage:
AC = (1/N) Σ_{i=1}^{N} C_i,
where C_i is the convergence rate on instance i and N is the number of instances in the given dataset. A higher Average Convergence indicates that the population explores more solutions within the available convergence time.

6.1.3. Standard Deviation

Standard deviation (SD) is the measure of the dispersion of a set of values from its mean value. The Average Standard Deviation (ASD) is the average of the standard deviations of all instances taken from the dataset:
ASD = (1/N) Σ_{i=1}^{N} SD_i,
where N is the number of instances in the given dataset.

6.1.4. Convergence Diversity

The Convergence Diversity (CD) is the difference between the best convergence rate and the worst convergence rate generated in the population:
CD = C_best − C_worst,
where C_best is the convergence rate of the best fitness individual and C_worst is the convergence rate of the worst fitness individual in the population.

6.1.5. Cost Diversion

Cost diversion is the difference between the known cost in the NRP instances and the cost obtained by our approach. Average Cost Diversion (ACD) is the average of the cost diversions over the total number of instances taken from the dataset:
ACD = (1/N) Σ_{i=1}^{N} (Obtained cost_i − Known cost_i),
where N is the number of instances in the given dataset.
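Assuming the formula shapes inferred from the prose definitions above (the inner convergence formula in particular is our own assumption), the metrics for a single instance can be sketched as:

```python
from statistics import mean, pstdev

# Hedged sketch of the evaluation metrics for one instance's population of
# solution costs; formula shapes are inferred from the prose, not the paper's code.
def least_error_rate(best, optimal):
    return (best - optimal) / optimal * 100.0

def convergence(cost, optimal):
    # assumed convergence rate: 100% when the cost equals the known optimum
    return (1.0 - abs(cost - optimal) / optimal) * 100.0

def instance_metrics(costs, optimal):
    conv = [convergence(c, optimal) for c in costs]
    return {
        "LER": least_error_rate(min(costs), optimal),
        "AC": mean(conv),                      # Average Convergence
        "SD": pstdev(costs),                   # dispersion of the run costs
        "CD": max(conv) - min(conv),           # Convergence Diversity
        "cost_diversion": min(costs) - optimal,
    }

m = instance_metrics([56, 60, 58, 75], optimal=56)
```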

6.2. Experimental Environment Setup

The proposed Directed Bee Colony algorithm with the Modified Nelder-Mead Method to solve the NRP is illustrated briefly in this section. The main objectives of the proposed algorithm for the multiobjective NRP are as follows:
(a) Minimize the total cost of the rostering problem.
(b) Satisfy all the hard constraints described in Table 1.
(c) Satisfy as many of the soft constraints described in Table 2 as possible.
(d) Enhance resource utilization.
(e) Distribute the workload equally among the nurses.

The Nurse Rostering Problem datasets are taken from the First International Nurse Rostering Competition (INRC2010) by PATAT-2010, a leading conference in Automated Timetabling [38]. The INRC2010 dataset is divided based on its complexity and size into three tracks, namely, sprint, medium, and long datasets. Each track is divided into four types, early, late, hidden, and hint, with reference to the competition INRC2010. The first track, sprint, is the easiest and consists of 10 nurses and 33 datasets, sorted as 10 early type, 10 late type, 10 hidden type, and 3 hint type datasets. The scheduling period is 28 days with 3 to 4 contract types, 3 to 4 daily shifts, and one skill specification. The second track, medium, is more complex than the sprint track; it consists of 30 to 31 nurses and 18 datasets, sorted as 5 early types, 5 late types, 5 hidden types, and 3 hint types. The scheduling period is 28 days with 3 to 4 contract types, 4 to 5 daily shifts, and 1 to 2 skill specifications. The most complicated track is long, with 49 to 50 nurses, and it consists of 18 datasets sorted as 5 early types, 5 late types, 5 hidden types, and 3 hint types. The scheduling period for this track is 28 days with 3 to 4 contract types, 5 daily shifts, and 2 skill specifications. The detailed description of the datasets available in INRC2010 is shown in Table 3. The datasets are classified into twelve cases based on the size of the instances and listed in Table 4.


Track | Type | Instance | Nurses | Skills | Shifts | Contracts | Unwanted patterns | Shift off | Day off | Weekend | Time period

Sprint | Early | 01–10 | 10 | 1 | 4 | 4 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 01-02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 03, 05, 08 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 04, 09 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-06-2010 to 28-06-2010
Sprint | Hidden | 06, 07 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hidden | 10 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 01, 03–05 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 02 | 10 | 1 | 3 | 3 | 4 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 06, 07, 10 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 08 | 10 | 1 | 4 | 3 | 0 | × | × | 2 | 1-01-2010 to 28-01-2010
Sprint | Late | 09 | 10 | 1 | 4 | 3 | 0 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Sprint | Hint | 01, 03 | 10 | 1 | 4 | 3 | 8 | | | 2 | 1-01-2010 to 28-01-2010
Sprint | Hint | 02 | 10 | 1 | 4 | 3 | 0 | | | 2 | 1-01-2010 to 28-01-2010

Medium | Early | 01–05 | 31 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hidden | 01–04 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Hidden | 05 | 30 | 2 | 5 | 4 | 9 | × | × | 2 | 1-06-2010 to 28-06-2010
Medium | Late | 01 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 02, 04 | 30 | 1 | 4 | 3 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 03 | 30 | 1 | 4 | 4 | 0 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Late | 05 | 30 | 2 | 5 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 01, 03 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010
Medium | Hint | 02 | 30 | 1 | 4 | 4 | 7 | | | 2 | 1-01-2010 to 28-01-2010

Long | Early | 01–05 | 49 | 2 | 5 | 3 | 3 | | | 2 | 1-01-2010 to 28-01-2010
Long | Hidden | 01–04 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Hidden | 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-06-2010 to 28-06-2010
Long | Late | 01, 03, 05 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Late | 02, 04 | 50 | 2 | 5 | 4 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 01 | 50 | 2 | 5 | 3 | 9 | × | × | 2, 3 | 1-01-2010 to 28-01-2010
Long | Hint | 02, 03 | 50 | 2 | 5 | 3 | 7 | × | × | 2 | 1-01-2010 to 28-01-2010


SI number | Case | Track | Type

1 | Case 1 | Sprint | Early
2 | Case 2 | Sprint | Hidden
3 | Case 3 | Sprint | Late
4 | Case 4 | Sprint | Hint
5 | Case 5 | Middle | Early
6 | Case 6 | Middle | Hidden
7 | Case 7 | Middle | Late
8 | Case 8 | Middle | Hint
9 | Case 9 | Long | Early
10 | Case 10 | Long | Hidden
11 | Case 11 | Long | Late
12 | Case 12 | Long | Hint

Table 3 gives the detailed description of the datasets; columns one to three index the dataset by track, type, and instance. Columns four to seven give the number of available nurses, skill specifications, daily shift types, and contracts. Column eight gives the number of unwanted shift patterns in the roster. The nurse preferences are managed by shift off and day off in columns nine and ten. The number of weekend days is shown in column eleven. The last column indicates the scheduling period. The symbol “×” shows that there is no shift off and day off for the corresponding datasets.

Table 4 shows the list of datasets used in the experiment, classified by size. The datasets in Case 1 to Case 4 are smaller in size, Case 5 to Case 8 are considered medium in size, and the larger sized datasets are classified from Case 9 to Case 12.

The performance of MODBCO for the NRP is evaluated using the INRC2010 dataset. The experiments on the different optimization algorithms are done under similar environment conditions to assess performance. The proposed algorithm for the NRP is coded on the MATLAB 2012 platform under Windows on an Intel 2 GHz Core 2 Quad processor with 2 GB of RAM. Table 3 describes the instances considered by MODBCO to solve the NRP. The parameters of the proposed system are set through empirical evaluation; appropriate parameter values were determined from preliminary experiments. The list of competitor methods chosen to evaluate the performance of the proposed algorithm is shown in Table 5. The heuristic parameters and their corresponding values are presented in Table 6.


Type | Method | Reference

M1 | Artificial Bee Colony Algorithm | [14]
M2 | Hybrid Artificial Bee Colony Algorithm | [15]
M3 | Global best harmony search | [16]
M4 | Harmony Search with Hill Climbing | [17]
M5 | Integer Programming Technique for NRP | [18]


Parameter | Value

Number of bees | 100
Maximum iterations | 1000
Initialization technique | Binary
Heuristic | Modified Nelder-Mead Method
Termination condition | Maximum iterations
Runs | 20
Reflection coefficient α |
Expansion coefficient γ |
Contraction coefficient β |
Shrinkage coefficient δ |

6.3. Statistical Analysis

Statistical analysis plays a major role in demonstrating the performance of the proposed algorithm over existing algorithms. Various statistical tests and measures to validate the performance of algorithms are reviewed by Demšar [39]. The authors used statistical tests such as ANOVA and Duncan's post hoc test to substantiate the effectiveness of the proposed algorithm and to differentiate it from existing algorithms.

6.3.1. ANOVA Test

To validate the performance of the proposed algorithm, ANOVA (Analysis of Variance) is used as the statistical analysis tool to determine whether one or more solutions significantly vary [40]. The authors used the one-way ANOVA test [41] to show the significance of the proposed algorithm. One-way ANOVA is used to validate and compare differences between various algorithms. The ANOVA test is performed with a 95% confidence interval and a significance level of 0.05. In the ANOVA test, the null hypothesis is tested to detect differences in the performance of the algorithms. If the obtained significance value is less than the critical value (0.05), then the null hypothesis is rejected, and thus the alternate hypothesis is accepted. Otherwise, the null hypothesis is accepted and the alternate hypothesis rejected.
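The one-way ANOVA F statistic underlying these tables can be reproduced by hand from the between-group and within-group sums of squares; the sketch below uses only the Python standard library and small hypothetical samples rather than the paper's SPSS-style output.

```python
from statistics import mean

# Stdlib sketch of the one-way ANOVA F statistic: F = MS_between / MS_within,
# with MS = sum of squares divided by its degrees of freedom.
def one_way_anova_F(groups):
    """groups: list of samples; return (F, df_between, df_within)."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    df_b = len(groups) - 1
    df_w = len(all_vals) - len(groups)
    F = (ss_between / df_b) / (ss_within / df_w)
    return F, df_b, df_w

# three hypothetical samples of best costs from three algorithms
F, df_b, df_w = one_way_anova_F([[56, 58, 51], [63, 66, 60], [57, 59, 52]])
```

A large F relative to the critical value of the F(df_between, df_within) distribution leads to rejecting the null hypothesis, exactly as reported in the significance columns of Tables 8, 10, 12, 14, 16, and 18.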

6.3.2. Duncan’s Multiple Range Test

After the null hypothesis is rejected, a post hoc or multiple comparison test is performed to explore the group differences. Duncan developed a procedure to test and compare all pairs over multiple ranges [42]. Duncan's multiple range test (DMRT) classifies the difference between any two methods as significant or nonsignificant. The method ranks the means in increasing or decreasing order and groups methods that are not significantly different.

6.4. Experimental and Result Analysis

In this section, the effectiveness of the proposed algorithm MODBCO is compared with other optimization algorithms for solving the NRP on the INRC2010 datasets under a similar environmental setup, using the performance metrics discussed above. The results produced by MODBCO are competitive with the previous methods listed in Tables 7–18. The computational analysis of the performance metrics is as follows.


Instance | Optimal value | MODBCO (best/worst) | M1 (best/worst) | M2 (best/worst) | M3 (best/worst) | M4 (best/worst) | M5 (best/worst)

Sprint early 01 | 56 | 56/75 | 63/74 | 57/81 | 59/75 | 58/77 | 58/77
Sprint early 02 | 58 | 59/77 | 66/89 | 59/80 | 61/82 | 64/88 | 60/85
Sprint early 03 | 51 | 51/68 | 60/83 | 52/75 | 54/70 | 59/84 | 53/67
Sprint early 04 | 59 | 59/77 | 68/86 | 60/76 | 63/80 | 67/84 | 61/85
Sprint early 05 | 58 | 58/74 | 65/86 | 59/77 | 60/79 | 63/86 | 60/84
Sprint early 06 | 54 | 53/69 | 59/81 | 55/79 | 57/73 | 58/75 | 56/77
Sprint early 07 | 56 | 56/75 | 62/81 | 58/79 | 60/75 | 61/85 | 57/78
Sprint early 08 | 56 | 56/76 | 59/79 | 58/79 | 59/80 | 58/82 | 57/78
Sprint early 09 | 55 | 55/82 | 59/79 | 57/76 | 59/80 | 61/78 | 56/80
Sprint early 10 | 52 | 52/67 | 58/76 | 54/73 | 55/75 | 58/78 | 53/76
Sprint hidden 01 | 32 | 32/48 | 45/67 | 34/57 | 43/60 | 46/65 | 33/55
Sprint hidden 02 | 32 | 32/51 | 41/59 | 34/55 | 37/53 | 44/68 | 33/51
Sprint hidden 03 | 62 | 62/79 | 74/92 | 63/86 | 71/90 | 78/96 | 62/84
Sprint hidden 04 | 66 | 66/81 | 79/102 | 67/91 | 80/96 | 78/100 | 66/91
Sprint hidden 05 | 59 | 58/73 | 68/90 | 60/79 | 63/84 | 69/88 | 59/77
Sprint hidden 06 | 134 | 128/146 | 164/187 | 131/145 | 203/219 | 169/188 | 132/150
Sprint hidden 07 | 153 | 154/172 | 181/204 | 154/175 | 197/215 | 187/210 | 155/174
Sprint hidden 08 | 204 | 201/219 | 246/265 | 205/225 | 267/286 | 240/260 | 206/224
Sprint hidden 09 | 338 | 337/353 | 372/390 | 339/363 | 274/291 | 372/389 | 340/365
Sprint hidden 10 | 306 | 306/324 | 328/347 | 307/330 | 347/368 | 322/342 | 308/324
Sprint late 01 | 37 | 37/53 | 50/68 | 38/57 | 46/66 | 52/72 | 39/57
Sprint late 02 | 42 | 41/61 | 53/74 | 43/61 | 50/71 | 56/80 | 44/67
Sprint late 03 | 48 | 45/65 | 57/80 | 50/73 | 57/76 | 60/77 | 49/72
Sprint late 04 | 75 | 71/87 | 88/108 | 75/95 | 106/127 | 95/115 | 74/99
Sprint late 05 | 44 | 46/66 | 52/65 | 46/68 | 53/74 | 57/74 | 46/70
Sprint late 06 | 42 | 42/60 | 46/65 | 44/68 | 45/64 | 52/70 | 43/62
Sprint late 07 | 42 | 44/63 | 51/69 | 46/69 | 62/81 | 55/78 | 44/68
Sprint late 08 | 17 | 17/36 | 17/37 | 19/41 | 19/39 | 19/40 | 17/39
Sprint late 09 | 17 | 17/33 | 18/37 | 19/42 | 19/40 | 17/40 | 17/40
Sprint late 10 | 43 | 43/59 | 56/74 | 45/67 | 56/74 | 54/71 | 43/62
Sprint hint 01 | 78 | 73/92 | 85/103 | 77/91 | 103/119 | 90/108 | 77/99
Sprint hint 02 | 47 | 43/59 | 57/76 | 48/68 | 61/80 | 56/81 | 45/68
Sprint hint 03 | 57 | 49/67 | 74/96 | 52/75 | 79/97 | 69/93 | 53/66
Medium early 01 | 240 | 245/263 | 262/284 | 247/267 | 247/262 | 280/305 | 250/269
Medium early 02 | 240 | 243/262 | 263/280 | 247/270 | 247/263 | 281/301 | 250/273
Medium early 03 | 236 | 239/256 | 261/281 | 244/264 | 244/262 | 287/311 | 245/269
Medium early 04 | 237 | 245/262 | 259/278 | 242/261 | 242/258 | 278/297 | 247/272
Medium early 05 | 303 | 310/326 | 331/351 | 310/332 | 310/329 | 330/351 | 313/338
Medium hidden 01 | 123 | 143/159 | 190/210 | 157/180 | 157/177 | 410/429 | 192/214
Medium hidden 02 | 243 | 230/248 | 286/306 | 256/277 | 256/273 | 412/430 | 266/284
Medium hidden 03 | 37 | 53/70 | 66/84 | 56/78 | 56/69 | 182/203 | 61/81
Medium hidden 04 | 81 | 85/104 | 102/119 | 95/119 | 95/114 | 168/191 | 100/124
Medium hidden 05 | 130 | 182/201 | 202/224 | 178/202 | 178/197 | 520/545 | 194/214
Medium late 01 | 157 | 176/195 | 207/227 | 175/198 | 175/194 | 234/257 | 179/194
Medium late 02 | 18 | 30/45 | 53/76 | 32/52 | 32/53 | 49/67 | 35/55
Medium late 03 | 29 | 35/52 | 71/90 | 39/60 | 39/59 | 59/78 | 44/66
Medium late 04 | 35 | 42/58 | 66/83 | 49/57 | 49/70 | 71/89 | 48/70
Medium late 05 | 107 | 129/149 | 179/199 | 135/156 | 135/156 | 272/293 | 141/164
Medium hint 01 | 40 | 42/62 | 70/90 | 49/71 | 49/65 | 64/82 | 50/71
Medium hint 02 | 84 | 91/107 | 142/162 | 95/116 | 95/115 | 133/158 | 96/115
Medium hint 03 | 129 | 135/153 | 188/209 | 141/161 | 141/152 | 187/208 | 148/166
Long early 01 | 197 | 194/209 | 241/259 | 200/222 | 200/221 | 339/362 | 220/243
Long early 02 | 219 | 228/245 | 276/293 | 232/254 | 232/253 | 399/419 | 253/275
Long early 03 | 240 | 240/258 | 268/291 | 243/257 | 243/262 | 349/366 | 251/273
Long early 04 | 303 | 303/321 | 336/356 | 306/324 | 306/320 | 411/430 | 319/342
Long early 05 | 284 | 284/300 | 326/347 | 287/310 | 287/308 | 383/403 | 301/321
Long hidden 01 | 346 | 389/407 | 444/463 | 403/422 | 403/421 | 4466/4488 | 422/442
Long hidden 02 | 90 | 108/126 | 132/150 | 120/139 | 120/139 | 1071/1094 | 128/148
Long hidden 03 | 38 | 48/63 | 61/78 | 54/72 | 54/75 | 163/181 | 54/79
Long hidden 04 | 22 | 27/45 | 49/71 | 32/54 | 32/51 | 113/132 | 36/58
Long hidden 05 | 41 | 55/71 | 78/99 | 59/81 | 59/70 | 139/157 | 60/83
Long late 01 | 235 | 249/267 | 290/310 | 260/278 | 260/276 | 588/606 | 262/286
Long late 02 | 229 | 261/280 | 295/318 | 265/278 | 265/281 | 577/595 | 263/281
Long late 03 | 220 | 259/275 | 307/325 | 264/285 | 264/285 | 567/588 | 272/295
Long late 04 | 221 | 257/292 | 304/323 | 263/284 | 263/281 | 604/627 | 276/294
Long late 05 | 83 | 92/107 | 142/161 | 104/122 | 104/125 | 329/349 | 118/138
Long hint 01 | 31 | 40/58 | 53/73 | 44/67 | 44/65 | 126/150 | 50/72
Long hint 02 | 17 | 29/47 | 40/62 | 32/55 | 32/51 | 122/145 | 36/61
Long hint 03 | 53 | 79/137 | 117/135 | 85/104 | 85/101 | 278/303 | 102/123

(a) ANOVA test

Source factor: best value
Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061949 | 5 | 212389.8 | 3.620681 | 0.003
Within groups | 23933354 | 408 | 58660.18 | |
Total | 24995303 | 413 | | |

(b) DMRT test

Duncan test: best value
Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
MODBCO | 69 | 120.2319 |
M2 | 69 | 124.1304 |
M5 | 69 | 128.087 |
M3 | 69 | 129.3478 |
M1 | 69 | 143.1594 |
M4 | 69 | | 263.5507
Sig. | | 0.629 | 1.000


Case | MODBCO | M1 | M2 | M3 | M4 | M5

Case 1 | | 11.54 | 2.54 | 5.77 | 9.41 | 2.88
Case 2 | | 20.16 | 1.69 | 19.81 | 22.35 | 0.83
Case 3 | | 18.27 | 5.63 | 23.24 | 24.72 | 2.26
Case 4 | | 20.03 | | 33.48 | 18.53 |
Case 5 | 0.00 | 9.57 | 2.73 | 2.73 | 16.31 | 3.93
Case 6 | | 46.37 | 27.71 | 27.71 | 220.44 | 40.62
Case 7 | 28.00 | 105.40 | 37.98 | 37.98 | 116.36 | 45.82
Case 8 | 3.93 | 63.26 | 14.97 | 14.97 | 54.43 | 18.00
Case 9 | 0.52 | 17.14 | 2.15 | 2.15 | 54.04 | 8.61
Case 10 | 15.55 | 69.70 | 36.25 | 36.25 | 652.47 | 43.25
Case 11 | 9.16 | 40.08 | 18.13 | 18.13 | 185.92 | 23.41
Case 12 | 49.56 | 109.01 | 63.52 | 63.52 | 449.54 | 88.50

(a) ANOVA test

Source factor: error rate
Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 638680.9 | 5 | 127736.18 | 15.26182 | 0.000
Within groups | 3414820 | 408 | 8369.657 | |
Total | 4053501 | 413 | | |

(b) DMRT test

Duncan test: error rate
Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
MODBCO | 69 | 5.402238 |
M2 | 69 | 13.77936 |
M5 | 69 | 17.31724 |
M3 | 69 | 20.99903 |
M1 | 69 | 36.49033 |
M4 | 69 | | 121.1591
Sig. | | 0.07559 | 1.000


Case | MODBCO | M1 | M2 | M3 | M4 | M5

Case 1 | 87.01 | 70.84 | 69.79 | 72.40 | 62.59 | 66.87
Case 2 | 91.96 | 65.88 | 76.93 | 64.19 | 56.88 | 77.21
Case 3 | 83.40 | 53.63 | 47.23 | 38.23 | 30.07 | 47.44
Case 4 | 99.02 | 62.96 | 77.23 | 45.94 | 53.14 | 77.20
Case 5 | 96.27 | 86.49 | 91.10 | 92.71 | 77.29 | 89.01
Case 6 | 79.49 | 42.52 | 52.87 | 59.08 | | 39.82
Case 7 | 56.34 | 24.52 | 26.23 | | | 9.97
Case 8 | 84.00 | 21.72 | 61.67 | 68.51 | 22.95 | 58.22
Case 9 | 96.98 | 78.81 | 91.68 | 92.59 | 39.78 | 84.36
Case 10 | 70.16 | 8.16 | 29.97 | 37.66 | | 18.54
Case 11 | 84.58 | 54.10 | 73.49 | 74.40 | | 66.95
Case 12 | 15.13 | | | | |

(a) ANOVA test

Source factor: Average Convergence
Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 712642.475 | 5 | 142528.495 | 15.47047 | 0.0000
Within groups | 3758878.26 | 408 | 9212.9369 | |
Total | 4471520.73 | 413 | | |

(b) DMRT test

Duncan test: Average Convergence
Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
M4 | 69 | |
M1 | 69 | | 46.4245
M5 | 69 | | 53.6879
M3 | 69 | | 57.6296
M2 | 69 | | 59.5103
MODBCO | 69 | | 81.6997
Sig. | | 1.00 | 0.054


Case | MODBCO | M1 | M2 | M3 | M4 | M5

Case 1 | 6.501382 | 8.551285 | 9.83255 | 8.645203 | 10.57402 | 9.239032
Case 2 | 5.275281 | 8.170864 | 11.33125 | 8.472083 | 10.55361 | 7.382535
Case 3 | 5.947749 | 8.474928 | 8.898058 | 8.337811 | 11.93913 | 8.033767
Case 4 | 5.270417 | 5.310443 | 10.24612 | 9.50821 | 9.092851 | 8.049586
Case 5 | 7.035945 | 9.400025 | 7.790991 | 10.28423 | 13.45243 | 11.46945
Case 6 | 7.15779 | 10.13578 | 9.109779 | 9.995431 | 13.54185 | 8.020126
Case 7 | 7.502947 | 9.069255 | 9.974609 | 8.341374 | 11.50138 | 10.60612
Case 8 | 7.861593 | 9.799198 | 7.447106 | 10.05138 | 11.63378 | 10.61275
Case 9 | 9.310626 | 13.83453 | 9.437943 | 12.13188 | 13.56668 | 10.54568
Case 10 | 10.42404 | 15.14141 | 11.87235 | 10.59549 | 14.15797 | 12.96181
Case 11 | 10.25454 | 14.68924 | 10.33091 | 10.30386 | 11.21071 | 13.3541
Case 12 | 10.6846 | 13.49546 | 10.75478 | 15.40791 | 14.42514 | 11.3357

(a) ANOVA test

Source factor: Average Standard Deviation
Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 697.4407 | 5 | 139.4881 | 13.6322 | 0.000
Within groups | 4174.76 | 408 | 10.23226 | |
Total | 4872.201 | 413 | | |

(b) DMRT test

Duncan test: Average Standard Deviation
Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05) | Subset 3 (alpha = 0.05)
MODBCO | 69 | 7.4743 | |
M2 | 69 | | 9.615152 |
M3 | 69 | | 9.677027 |
M5 | 69 | | 9.729477 |
M1 | 69 | | 10.13242 |
M4 | 69 | | | 11.9316
Sig. | | 1.00 | 0.394 | 1.00


Case | MODBCO | M1 | M2 | M3 | M4 | M5

Case 1 | 30.00 | 35.25 | 37.28 | 32.83 | 37.94 | 38.83
Case 2 | 22.96 | 27.92 | 29.18 | 23.85 | 27.95 | 27.62
Case 3 | 53.05 | 56.21 | 64.66 | 59.29 | 60.79 | 65.05
Case 4 | 29.99 | 34.03 | 33.62 | 30.84 | 39.46 | 33.32
Case 5 | 7.01 | 7.87 | 8.33 | 6.71 | 8.77 | 9.29
Case 6 | 20.89 | 22.21 | 26.98 | 19.29 | 25.45 | 24.87
Case 7 | 43.69 | 54.66 | 48.13 | 55.47 | 50.24 | 56.18
Case 8 | 27.67 | 30.03 | 31.83 | 24.11 | 30.35 | 29.69
Case 9 | 6.89 | 8.10 | 8.22 | 8.04 | 8.24 | 9.10
Case 10 | 37.10 | 44.29 | 45.53 | 38.95 | 41.91 | 49.98
Case 11 | 11.43 | 11.64 | 10.81 | 11.36 | 11.91 | 12.15
Case 12 | 91.13 | 75.96 | 81.78 | 69.90 | 86.63 | 85.88

(a) ANOVA test

Source factor: Convergence Diversity
Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 514.4758 | 5 | 102.8952 | 9.287168 | 0.000
Within groups | 4520.348 | 408 | 11.07928 | |
Total | 5034.824 | 413 | | |

(b) DMRT test

Duncan test: Convergence Diversity
Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05) | Subset 3 (alpha = 0.05)
M3 | 69 | 18.363232 | |
MODBCO | 69 | 18.42029 | |
M1 | 69 | | 19.68116 |
M2 | 69 | | 20.57971 |
M4 | 69 | | 20.68116 |
M5 | 69 | | | 21.2464
Sig. | | 0.919 | 0.096 | 1.000


Case | MODBCO | M1 | M2 | M3 | M4 | M5

Case 1 | 0.00 | 6.40 | 1.40 | 3.20 | 5.20 | 1.60
Case 2 | | 21.20 | 0.80 | 19.60 | 21.90 | 0.80
Case 3 | | 8.10 | 1.80 | 10.60 | 11.00 | 0.90
Case 4 | | 11.33 | | 20.33 | 11.00 |
Case 5 | 5.20 | 24.00 | 6.80 | 6.80 | 40.00 | 9.80
Case 6 | 15.80 | 46.40 | 25.60 | 25.60 | 215.60 | 39.80
Case 7 | 13.20 | 46.00 | 16.80 | 16.80 | 67.80 | 20.20
Case 8 | 5.00 | 49.00 | 10.67 | 10.67 | 43.67 | 13.67
Case 9 | 1.20 | 40.80 | 5.00 | 5.00 | 127.60 | 20.20
Case 10 | 18.00 | 45.40 | 26.20 | 26.20 | 100.00 | 32.60
Case 11 | 26.00 | 70.00 | 33.60 | 33.60 | 335.40 | 40.60
Case 12 | 15.67 | 36.33 | 20.00 | 20.00 | 141.67 | 29.00

(a) ANOVA test

Source factor: Average Cost Diversion
Source | Sum of squares | df | Mean square | F | Sig.
Between groups | 1061949 | 5 | 212389.8 | 4.919985 | 0.000
Within groups | 17612867 | 408 | 43168.79 | |
Total | 18674816 | 413 | | |

(b) DMRT test

Duncan test: Average Cost Diversion
Method | N | Subset 1 (alpha = 0.05) | Subset 2 (alpha = 0.05)
MODBCO | 69 | 6.202899 |
M2 | 69 | 10.10145 |
M5 | 69 | 14.05797 |
M3 | 69 | 15.31884 |
M1 | 69 | 29.13043 |
M4 | 69 | | 149.5217
Sig. | | 0.573558 | 1.000

6.4.1. Best Value

The results obtained by MODBCO and the competitive methods are shown in Table 7. The number in the table refers to the best solution obtained using the corresponding algorithm. Since the objective of the NRP is cost minimization, the lowest values are the best solutions attained. In the evaluation of the performance of the algorithm, the authors considered 69 datasets of diverse sizes. MODBCO attained the best result on 34 of the 69 instances.

The statistical analysis tests ANOVA and DMRT for the best values are shown in Table 8. The significance value is less than 0.05, so the null hypothesis is rejected: there is a significant difference between the various optimization algorithms. The DMRT test shows the homogeneous groups; two homogeneous groups for best values are formed among the competitor algorithms.

6.4.2. Error Rate

The evaluation based on the error rate shows that the proposed MODBCO yields a lower error rate than the other competitor techniques. The computational analysis based on error rate (%) is shown in Table 9; out of the 33 instances of the sprint type, 18 instances achieved a zero error rate. For the sprint type dataset, 88% of the instances attained a lower error rate. For the medium and larger sized datasets, the obtained rates are 62% and 44%, respectively. A negative value in a column indicates that the corresponding instance attained a lower optimum value than that specified in INRC2010.

The competitors M2 and M5 generated better solutions at the initial stage; as the size of the dataset increased, they could not find the optimal solution and became trapped in local optima. The error rates (%) obtained by MODBCO and the other algorithms are shown in Figure 7.

The statistical analysis on error rate is presented in Table 10. In ANOVA test, the significance value is 0.000 which is less than 0.05, showing rejection of the null hypothesis. Thus, there is a significant difference in value with respect to various optimization algorithms. The DMRT test indicates two homogeneous groups formed from different optimization algorithms with respect to the error rate.

6.4.3. Average Convergence

The Average Convergence of the solution is the average fitness of the population relative to the fitness of the optimal solution. The computational results for Average Convergence are shown in Table 11. MODBCO shows a 90% convergence rate on small instances and an 82% convergence rate on medium instances. For longer instances, it shows a 77% convergence rate. Negative values in a column show that the corresponding instances deviated from the optimal solution and were trapped in local optima. It is observed that the convergence rate reduces as the problem size increases and becomes worse for many algorithms on the larger instances, as shown in Table 11. The Average Convergence rates attained by the various optimization algorithms are depicted in Figure 8.

The statistical test result for Average Convergence is observed in Table 12 with different optimization algorithms. From the table, it is clear that there is a significant difference in mean values of convergence in different optimization algorithms. The ANOVA test depicts the rejection of the null hypothesis since the value of significance is 0.000. The post hoc analysis test shows there are two homogenous groups among different optimization algorithms with respect to the mean values of convergence.

6.4.4. Average Standard Deviation

The Average Standard Deviation is the dispersion of values from its mean value, and it helps to deduce features of the proposed algorithm. The computed result with respect to the Average Standard Deviation is shown in Table 13. The Average Standard Deviation attained by various optimization algorithms is depicted in Figure 9.

The statistical test result for Average Standard Deviation is shown in Table 14 with different types of optimization algorithms. There is a significant difference in mean values of standard deviation in different optimization algorithms. The ANOVA test proves the null hypothesis is rejected since the value of significance is 0.00 which is less than the critical value 0.05. In DMRT test, there are three homogenous groups among different optimization algorithms with respect to the mean values of standard deviation.

6.4.5. Convergence Diversity

The Convergence Diversity of a solution is the difference between the best and worst convergence values generated in the population. Together with the error rate, Convergence Diversity helps infer the performance of the proposed algorithm. The computational analysis of Convergence Diversity for MODBCO and the competitor algorithms is shown in Table 15. The Convergence Diversity is 58% for the smaller datasets and 50% for the medium datasets; for the larger datasets it is 62%. Figure 10 compares the various optimization algorithms with respect to Convergence Diversity.
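Taking the definition above literally, Convergence Diversity is simply the spread between the best and worst convergence values observed in the population:

```python
def convergence_diversity(convergence_values):
    """Gap between the best and worst convergence values in the
    population; a small gap means the whole population converged
    uniformly rather than a few lucky runs reaching the optimum."""
    return max(convergence_values) - min(convergence_values)
```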

The ANOVA and DMRT results for Convergence Diversity are shown in Table 16. There is a significant difference in the mean Convergence Diversity values across the optimization algorithms: the significance value is 0.000, which is less than the critical value, so the null hypothesis is rejected. The DMRT test groups the algorithms by mean value and identifies three homogeneous groups among the optimization algorithms with respect to the mean Convergence Diversity.

6.4.6. Average Cost Diversion

The computational analysis based on cost diversion shows that the proposed MODBCO yields less diversion in cost than the competitor techniques. The results for Average Cost Diversion are shown in Table 17. For the smaller and medium datasets, 13% and 38% of instances diverged, respectively, and many of these still yielded the optimum value; for the larger dataset, 56% of instances diverged in cost. A negative value in the table indicates that the corresponding instance achieved a new optimized value. Figure 11 compares the various optimization algorithms with respect to Average Cost Diversion.
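The exact formula is not restated here, but the sign convention described (negative values marking newly improved solutions) is consistent with a relative deviation of the obtained cost from the best-known cost for this minimization problem; a hedged sketch:

```python
def cost_diversion_percent(obtained_cost, best_known_cost):
    """Relative deviation (%) of an obtained roster cost from the
    best-known cost (minimization). Negative values mean the run
    found a better cost than the previous best-known value."""
    return (obtained_cost - best_known_cost) / best_known_cost * 100.0
```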

The ANOVA and DMRT results for Average Cost Diversion are shown in Table 18. The table indicates a significant difference in the mean cost diversion values across the optimization algorithms: the significance value is 0.000, which is less than the critical value, so the null hypothesis is rejected. The DMRT test reveals two homogeneous groups among the optimization algorithms with respect to the mean cost diversion values.

7. Discussion

The experiments on the NP-hard combinatorial Nurse Rostering Problem are conducted with the proposed MODBCO algorithm. Several existing algorithms are chosen to solve the NRP and compared with MODBCO; the best values obtained by the proposed algorithm and the competitor methods are tabulated in Table 6. Various performance metrics are used to evaluate the efficiency of MODBCO. Tables 7–18 report the outcomes of the proposed algorithm and the existing methods. Tables 7–18 and Figures 7–11 clearly show that MODBCO attains the best values on the performance metrics compared with the competitor algorithms on the INRC2010 dataset.

Compared with the other existing methods, the mean value obtained by MODBCO is 19% closer to the optimum, and it attains a smaller worst value in addition to the best solution. When the datasets are divided by size into smaller, medium, and larger datasets, the standard deviation of MODBCO is reduced by 4.9%, 2.22%, and 4.13%, respectively. The error rate of the proposed approach, compared with the competitor methods, is reduced by 10.6% for the smaller datasets, 9.45% for the medium datasets, and 7.04% for the larger datasets. The convergence rate of MODBCO reaches 90% for the smaller datasets, 82% for the medium datasets, and 77.37% for the larger datasets. Overall, the error rate of the proposed algorithm is reduced by 77% compared with the competitor methods.

The proposed system is also tested on larger datasets, where it performs substantially better than the other techniques. Incorporating the Modified Nelder-Mead Method into the Directed Bee Colony Optimization Algorithm strengthens the exploitation strategy within the given exploration search space. The method balances exploration and exploitation without bias, so MODBCO converges the population toward an optimal solution at the end of each iteration. Both the computational and the statistical analyses show significant performance gains over the competitor algorithms in solving the NRP. The computational cost per iteration is higher due to the local heuristic Nelder-Mead Method; nevertheless, the proposed algorithm outperforms exact methods and other heuristic approaches for the NRP in terms of time complexity.

8. Conclusion

This paper tackles the NRP using the Multiobjective Directed Bee Colony Optimization Algorithm (MODBCO). To solve the NRP effectively, the Directed Bee Colony algorithm is chosen for global search and the Modified Nelder-Mead Method for local search. The proposed algorithm is evaluated on the INRC2010 dataset, and its performance is compared with five existing methods. To assess performance, 69 cases drawn from datasets of various sizes are chosen, and 34 of the 69 instances achieve the best result. The algorithm thus contributes a new deterministic search and an effective heuristic approach to solving the NRP, and MODBCO outperforms classical Bee Colony Optimization in solving the NRP while satisfying both hard and soft constraints.

Future work can be directed toward (a) adapting the proposed MODBCO to other scheduling and timetabling problems, (b) exploring infeasible solutions to approach the optimal solution, (c) further tuning the parameters of the proposed algorithm and measuring the exploitation and exploration strategies, and (d) applying the algorithm to the Second International Nurse Rostering Competition (INRC 2014) datasets.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This work is part of the research projects sponsored by the Major Project Scheme, UGC, India, Reference nos. F.No./2014-15/NFO-2014-15-OBC-PON-3843/(SA-III/WEBSITE), dated March 2015. The authors thank the sponsoring agencies for their financial support.

References

  1. M. Črepinšek, S.-H. Liu, and M. Mernik, “Exploration and exploitation in evolutionary algorithms: a survey,” ACM Computing Surveys, vol. 45, no. 3, article 35, 2013.
  2. R. Bai, E. K. Burke, G. Kendall, J. Li, and B. McCollum, “A hybrid evolutionary approach to the nurse rostering problem,” IEEE Transactions on Evolutionary Computation, vol. 14, no. 4, pp. 580–590, 2010.
  3. M. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2009.
  4. D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Pearson Education, 1988.
  5. J. Kennedy, “Particle swarm optimization,” in Encyclopedia of Machine Learning, pp. 760–766, Springer US, 2011.
  6. M. Dorigo, M. Birattari, and T. Stützle, “Ant colony optimization,” IEEE Computational Intelligence Magazine, vol. 1, no. 4, pp. 28–39, 2006.
  7. D. Teodorović, P. Lučić, G. Marković, and M. Dell'Orco, “Bee colony optimization: principles and applications,” in Proceedings of the 8th Seminar on Neural Network Applications in Electrical Engineering, pp. 151–156, September 2006.
  8. D. Karaboga and B. Basturk, “On the performance of artificial bee colony (ABC) algorithm,” Applied Soft Computing, vol. 8, no. 1, pp. 687–697, 2008.
  9. R. Kumar, “Directed bee colony optimization algorithm,” Swarm and Evolutionary Computation, vol. 17, pp. 60–73, 2014.
  10. T. Osogami and H. Imai, “Classification of various neighborhood operations for the nurse scheduling problem,” in Proceedings of the International Symposium on Algorithms and Computation, Taipei, Taiwan, pp. 72–83, Springer, Berlin, Germany, December 2000.
  11. H. H. Millar and M. Kiragu, “Cyclic and non-cyclic scheduling of 12 h shift nurses by network programming,” European Journal of Operational Research, vol. 104, no. 3, pp. 582–592, 1998.
  12. J. Van den Bergh, J. Beliën, P. De Bruecker, E. Demeulemeester, and L. De Boeck, “Personnel scheduling: a literature review,” European Journal of Operational Research, vol. 226, no. 3, pp. 367–385, 2013.
  13. B. Cheang, H. Li, A. Lim, and B. Rodrigues, “Nurse rostering problems—a bibliographic survey,” European Journal of Operational Research, vol. 151, no. 3, pp. 447–460, 2003.
  14. L. B. Asaju, M. A. Awadallah, M. A. Al-Betar, and A. T. Khader, “Solving nurse rostering problem using artificial bee colony algorithm,” in Proceedings of the 7th International Conference on Information Technology (ICIT '15), pp. 32–38, Amman, Jordan, May 2015.
  15. M. A. Awadallah, A. L. Bolaji, and M. A. Al-Betar, “A hybrid artificial bee colony for a nurse rostering problem,” Applied Soft Computing, vol. 35, pp. 726–739, 2015.
  16. M. A. Awadallah, A. T. Khader, M. A. Al-Betar, and A. L. Bolaji, “Global best harmony search with a new pitch adjustment designed for nurse rostering,” Journal of King Saud University-Computer and Information Sciences, vol. 25, no. 2, pp. 145–162, 2013.
  17. M. A. Awadallah, M. A. Al-Betar, A. T. Khader, A. L. Bolaji, and M. Alkoffash, “Hybridization of harmony search with hill climbing for highly constrained nurse rostering problem,” Neural Computing and Applications, vol. 28, no. 3, pp. 463–482, 2017.
  18. H. G. Santos, T. A. M. Toffolo, R. A. M. Gomes, and S. Ribas, “Integer programming techniques for the nurse rostering problem,” Annals of Operations Research, vol. 239, no. 1, pp. 225–251, 2016.
  19. I. Berrada, J. A. Ferland, and P. Michelon, “A multi-objective approach to nurse scheduling with both hard and soft constraints,” Socio-Economic Planning Sciences, vol. 30, no. 3, pp. 183–193, 1996.
  20. E. K. Burke, J. Li, and R. Qu, “A Pareto-based search methodology for multi-objective nurse scheduling,” Annals of Operations Research, vol. 196, pp. 91–109, 2012.
  21. K. A. Dowsland and J. M. Thompson, “Solving a nurse scheduling problem with knapsacks, networks and tabu search,” Journal of the Operational Research Society, vol. 51, no. 7, pp. 825–833, 2000.
  22. K. A. Dowsland, “Nurse scheduling with tabu search and strategic oscillation,” European Journal of Operational Research, vol. 106, no. 2-3, pp. 393–407, 1998.
  23. E. Burke, P. De Causmaecker, and G. Vanden Berghe, “A hybrid tabu search algorithm for the nurse rostering problem,” in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 1585, pp. 187–194, Springer, Berlin, Germany, 1998.
  24. E. K. Burke, G. Kendall, and E. Soubeiga, “A tabu-search hyperheuristic for timetabling and rostering,” Journal of Heuristics, vol. 9, no. 6, pp. 451–470, 2003.
  25. E. Burke, P. Cowling, P. De Causmaecker, and G. V. Berghe, “A memetic approach to the nurse rostering problem,” Applied Intelligence, vol. 15, no. 3, pp. 199–214, 2001.
  26. M. Hadwan and M. Ayob, “A constructive shift patterns approach with simulated annealing for nurse rostering problem,” in Proceedings of the International Symposium on Information Technology (ITSim '10), pp. 1–6, IEEE, Kuala Lumpur, Malaysia, June 2010.
  27. E. Sharif, M. Ayob, and M. Hadwan, “Hybridization of heuristic approach with variable neighborhood descent search to solve nurse rostering problem at Universiti Kebangsaan Malaysia Medical Centre (UKMMC),” in Proceedings of the 3rd Conference on Data Mining and Optimization (DMO '11), pp. 178–183, June 2011.
  28. U. Aickelin and K. A. Dowsland, “An indirect genetic algorithm for a nurse-scheduling problem,” Computers and Operations Research, vol. 31, no. 5, pp. 761–778, 2004.
  29. S. Asta, E. Özcan, and T. Curtois, “A tensor based hyper-heuristic for nurse rostering,” Knowledge-Based Systems, vol. 98, pp. 185–199, 2016.
  30. K. Anwar, M. A. Awadallah, A. T. Khader, and M. A. Al-Betar, “Hyper-heuristic approach for solving nurse rostering problem,” in Proceedings of the IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL '14), pp. 1–6, December 2014.
  31. N. Todorovic and S. Petrovic, “Bee colony optimization algorithm for nurse rostering,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, no. 2, pp. 467–473, 2013.
  32. X.-S. Yang, Nature-Inspired Meta-Heuristic Algorithms, Luniver Press, 2010.
  33. S. Goyal, “The applications survey: bee colony,” IRACST-Engineering Science and Technology, vol. 2, no. 2, pp. 293–297, 2012.
  34. T. D. Seeley, P. Kirk Visscher, and K. M. Passino, “Group decision-making in honey bee swarms,” American Scientist, vol. 94, no. 3, pp. 220–229, 2006.
  35. K. M. Passino, T. D. Seeley, and P. K. Visscher, “Swarm cognition in honey bees,” Behavioral Ecology and Sociobiology, vol. 62, no. 3, pp. 401–414, 2008.
  36. W. Jiao and Z. Shi, “A dynamic architecture for multi-agent systems,” in Proceedings of the Technology of Object-Oriented Languages and Systems (TOOLS 31 '99), pp. 253–260, Nanjing, China, November 1999.
  37. W. Zhong, J. Liu, M. Xue, and L. Jiao, “A multi-agent genetic algorithm for global numerical optimization,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 1128–1141, 2004.
  38. S. Haspeslagh, P. De Causmaecker, A. Schaerf, and M. Stølevik, “The first international nurse rostering competition 2010,” Annals of Operations Research, vol. 218, no. 1, pp. 221–236, 2014.
  39. J. Demšar, “Statistical comparisons of classifiers over multiple data sets,” Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.
  40. A. Costa, F. A. Cappadonna, and S. Fichera, “A dual encoding-based meta-heuristic algorithm for solving a constrained hybrid flow shop scheduling problem,” Computers and Industrial Engineering, vol. 64, no. 4, pp. 937–958, 2013.
  41. G. González-Rodríguez, A. Colubi, and M. Á. Gil, “Fuzzy data treated as functional data: a one-way ANOVA test approach,” Computational Statistics and Data Analysis, vol. 56, no. 4, pp. 943–955, 2012.
  42. D. B. Duncan, “Multiple range and multiple F tests,” Biometrics, vol. 11, pp. 1–42, 1955.

Copyright © 2017 M. Rajeswari et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
