Research Article | Open Access
A Knowledge-Based Simulated Annealing Algorithm for Multiple Satellites Mission Planning Problems
Multiple satellites mission planning is a complex combinatorial optimization problem. A knowledge-based simulated annealing algorithm is proposed for multiple satellites mission planning problems. The experimental results suggest that the proposed algorithm is effective on the given problem. The knowledge-based simulated annealing method provides a useful reference for improving existing optimization approaches.
The imaging satellite plays a significant role in various fields such as disaster prevention and environmental protection and has attracted great attention from many scholars worldwide [1, 2]. Although the number of satellites in orbit is increasing, imaging satellite resources remain limited compared with the rapidly growing imaging requirements [3, 4]. To improve the utilization of satellite resources, appropriate mathematical models and software tools must be employed to achieve better management and allocation of satellite resources [5, 6].
With the development of the aerospace industry, the number and variety of imaging satellites are increasing, and both the number and the categories of imaging requirements have grown substantially. It is therefore necessary to schedule multiple imaging satellites comprehensively [8, 9]. This paper proposes a novel method for the multiple satellites mission planning problem [10, 11].
The motivation of this work can be summarized as follows. First, multiple satellites mission planning is a classical combinatorial optimization problem, and new methods for it are urgently needed. Second, the interaction between evolution and learning has attracted increasing attention, so a knowledge-based simulated annealing algorithm is designed and implemented in this fashion.
2. Multiple Satellites Mission Planning Problems
The multiple satellites mission planning problem can be summarized as follows: under the resource constraints and imaging requirements, make a comprehensive schedule of spot targets and regional targets, allocate satellite resources and working time to each target, and develop an optimized observation plan that maximizes the priority summation of the completed targets. Two subproblems must be solved: how to schedule spot targets and regional targets comprehensively and, based on that schedule, how to optimize the observation plan of the satellite resources. The multiple satellites mission planning problem is a complex combinatorial optimization problem.
The following assumptions are made. (1) Each satellite carries only one sensor. If a satellite carries multiple sensors, it can be treated as multiple single-sensor satellites; for this reason, "sensor" and "satellite" are used interchangeably below. (2) A regional target can be divided into multiple atomic tasks, while a spot target is itself a single atomic task. In this paper, an atomic task is a subtask of a target that can be observed within one visible time window of one satellite; a regional target is accordingly divided into sets of atomic tasks indexed by the satellite, the visible time window of that satellite over the target, and the atomic tasks observable within that window. (3) To improve the utilization of imaging resources, multiple atomic tasks can be merged into one composite task that is observed in a single pass.
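As an illustration of assumption (2), the decomposition of targets into atomic tasks can be sketched as follows. This is a minimal sketch; the class and field names are hypothetical, not the paper's notation.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class AtomicTask:
    satellite: int   # observing satellite (one sensor per satellite, by assumption (1))
    window: int      # index of the visible time window
    strip: int       # strip of a regional target covered within this window

@dataclass
class Target:
    priority: int
    is_regional: bool

def decompose(target: Target, visibility: Dict[int, Tuple[int, int]]) -> List[AtomicTask]:
    """Enumerate the atomic tasks of a target.

    visibility maps satellite id -> (number of visible windows,
    number of strips observable per window). A spot target yields one
    atomic task per window; a regional target yields one per strip."""
    atoms = []
    for sat, (n_windows, n_strips) in visibility.items():
        for w in range(n_windows):
            for s in range(n_strips if target.is_regional else 1):
                atoms.append(AtomicTask(sat, w, s))
    return atoms

# A regional target seen by two satellites, two windows each, three strips per window:
regional = Target(priority=5, is_regional=True)
atoms = decompose(regional, {1: (2, 3), 2: (2, 3)})
print(len(atoms))  # 12
```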
The inputs of the problem are as follows. (1) The number of satellites and, for each satellite, the longest working time within a single booting. (2) For each satellite, the maximum storage capacity and energy level, together with the storage and energy consumed per unit of observation time. (3) For each satellite, the maximum number of side-view imaging operations in each orbit circle, the swaying speed (degrees per second), the energy consumed per degree of swaying, and the settling time after swaying. (4) The number of tasks, numbered so that the spot targets come first and the regional targets last, and the priority of each target. (5) For each sensor-target pair, the set of visible time windows and the total number of such windows. (6) The starting time, ending time, and observation angle of each atomic task. (7) The starting time, ending time, and observation angle of each composite task. (8) For each satellite, the set of its composite tasks and their number.
In the multiple satellites mission planning problem, the outputs (decision variables) determine the observation plan: which targets are observed and the satellite resource, time window, and working time allocated to each.
The objective of this problem is to maximize the priority summation of fulfilled tasks, that is, the sum of the priority summation of the completed spot targets and the priority summation of the regional targets. The latter is a little more complex to compute. First, the overlap area between the observed atomic tasks and each regional target is computed as the area of the intersection between the union of the polygons of the observed atomic tasks and the polygon of the regional target. Second, the coverage rate of each regional target is computed as the ratio of this overlap area to the polygon area of the regional target. Finally, each regional target contributes its priority weighted by its coverage rate, and the priority summation of regional targets is the sum of these contributions.
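The coverage computation can be illustrated with axis-aligned rectangles standing in for the polygons of the paper. This is a simplifying assumption: real targets and strips are general polygons, and overlapping strips would require a proper polygon union rather than a sum.

```python
def rect_area(r):
    """Area of an axis-aligned rectangle r = (xmin, ymin, xmax, ymax)."""
    return max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])

def rect_intersection(a, b):
    """Intersection rectangle of a and b (possibly empty, i.e. zero area)."""
    return (max(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]), min(a[3], b[3]))

def coverage_rate(region, strips):
    """Covered fraction of the region; strips are assumed pairwise disjoint,
    so the union area is the sum of the per-strip intersections."""
    covered = sum(rect_area(rect_intersection(region, s)) for s in strips)
    return covered / rect_area(region)

region = (0.0, 0.0, 10.0, 10.0)
strips = [(0.0, 0.0, 10.0, 4.0), (0.0, 4.0, 10.0, 7.0)]  # two disjoint strips
rate = coverage_rate(region, strips)
print(rate)             # 0.7
priority = 5
print(priority * rate)  # the regional target contributes priority * coverage
```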
(1) Uniqueness constraint on spot targets: each spot target can be observed at most once. (2) Conversion-time constraints between composite tasks: each composite task corresponds to one observation activity of the satellite, and every two consecutive composite tasks must satisfy the corresponding conversion-time constraint; the conversion time includes the rotation time and the settling time of the sway. (3) Storage constraints of the satellite: the data recorded during all observations must not exceed the storage capacity. (4) Energy constraints of the satellite: the satellite consumes energy during the imaging and swaying processes, so the energy consumption can be represented as a function of the observation time and the sensor's swaying, and it must not exceed the energy limitation. (5) Constraint on the maximum number of side-view imaging operations in a single orbit circle: the orbit circle of each observation activity can be identified from its starting and ending times, and the number of side-view observations in each circle is then checked against this limit.
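A minimal sketch of checking constraints (2)-(4) on one satellite's composite tasks, each given as a (start, end, angle) tuple sorted by start time. All function, parameter, and variable names here are illustrative assumptions, not the paper's notation.

```python
def feasible(tasks, sway_speed, settle_time,
             mem_rate, mem_cap, energy_rate, sway_energy, energy_cap):
    """Check storage, energy, and conversion-time constraints on one
    satellite's composite tasks, sorted by start time."""
    total_obs = sum(end - start for start, end, _ in tasks)
    total_sway = sum(abs(b[2] - a[2]) for a, b in zip(tasks, tasks[1:]))
    if mem_rate * total_obs > mem_cap:                                   # (3) storage
        return False
    if energy_rate * total_obs + sway_energy * total_sway > energy_cap:  # (4) energy
        return False
    for (_, e1, a1), (s2, _, a2) in zip(tasks, tasks[1:]):
        # (2) the gap must cover the sway rotation time plus the settling time
        if s2 - e1 < abs(a2 - a1) / sway_speed + settle_time:
            return False
    return True

plan = [(0.0, 10.0, 0.0), (30.0, 40.0, 10.0)]
print(feasible(plan, sway_speed=1.0, settle_time=5.0,
               mem_rate=1.0, mem_cap=50.0,
               energy_rate=1.0, sway_energy=2.0, energy_cap=100.0))  # True
```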
3. The Proposed Approach
Existing intelligent optimization methods do not involve the effective management (i.e., learning, storage, and utilization) of the domain knowledge of practical problems, and thus they cannot effectively obtain the optimal solutions of optimization problems. In our view, a complex optimization problem can be solved by learning related knowledge from the optimization process and then employing it to guide the subsequent optimization. In this paper we mainly focus on explicit knowledge, namely, knowledge that can be definitely expressed: it can be expressed and stored in a computer language and updated and adopted by other models.
In this paper, the knowledge-based simulated annealing algorithm is defined as a hybrid approach that combines a simulated annealing model with a knowledge model: the simulated annealing model searches the solution space of the optimization problem with a "neighborhood search" strategy, while the knowledge model learns useful knowledge from the optimization process and uses it to guide the subsequent optimization. Its basic framework is shown in Figure 1.
The working mechanism of the knowledge-based simulated annealing algorithm is shown in Figure 2. The top part illustrates the optimization process of the simulated annealing algorithm: search the feasible space of the optimization problem through the "neighborhood search" strategy and converge to a near-optimal solution through continual iterations. The bottom part shows the effect of the knowledge model: learn useful knowledge from the early optimization process and apply it to guide the subsequent optimization.
During the early iterations of the knowledge-based simulated annealing algorithm, the available samples are insufficient, so the reliability of the learned knowledge is low and its guidance of the optimization process is weak. As the iterations proceed, the reliability of the learned knowledge increases and its guidance becomes more distinct. Compared with the traditional simulated annealing algorithm, the knowledge-based version, with the help of the knowledge model, can either converge to a satisfactory solution more quickly or converge to a solution of higher quality.
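As a sketch of the mechanism in Figures 1 and 2, the following toy loop combines a standard simulated annealing acceptance rule with one simple piece of operator knowledge: success counts that bias the roulette choice of the next move operator. The bit-flip problem and all names are illustrative assumptions, not the paper's model.

```python
import math
import random

def knowledge_sa(init, operators, score, iters=2000, t0=1.0, cooling=0.99, seed=1):
    """Simulated annealing whose operator choice is biased by operator
    knowledge: a success count per operator, used as roulette weights."""
    rng = random.Random(seed)
    x, best = init, init
    success = [1] * len(operators)   # operator knowledge, initialised to 1
    t = t0
    for _ in range(iters):
        i = rng.choices(range(len(operators)), weights=success)[0]
        y = operators[i](x, rng)
        delta = score(y) - score(x)
        if delta >= 0 or rng.random() < math.exp(delta / t):
            x = y                    # standard SA acceptance rule
        if score(x) > score(best):
            best = x
            success[i] += 1          # reward the operator that improved the best
        t *= cooling
    return best

def flip_one(x, rng):
    """Move operator: flip one random bit."""
    j = rng.randrange(len(x))
    return x[:j] + [1 - x[j]] + x[j + 1:]

def flip_two(x, rng):
    """Move operator: flip two random bits."""
    return flip_one(flip_one(x, rng), rng)

# Toy problem: maximize the number of ones in a 20-bit string.
best = knowledge_sa([0] * 20, [flip_one, flip_two], sum)
print(sum(best))
```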
3.1. Knowledge Definition, Learning, and Application
(1) Operator Knowledge. Different operators usually have different application scopes: when solving practical problems, a single operator may be more effective on some problems than on others, so it is very difficult to find a general operator that solves all kinds of practical problems effectively. To improve the performance of the simulated annealing algorithm, we therefore adopt several kinds of operators for its operations, with the hope of discovering operators that can effectively solve the problem at hand.
When an operator is applied in one operation, the operation is considered successful if the best individual in the newly generated individual set is better than the best individual in the original set. The optimization performance of an operator is defined as the number of successful operations it has achieved. Operator knowledge refers to the knowledge accumulated about the optimization performance of the various operators.
A simple case of the extraction and application of operator knowledge is as follows. Suppose the simulated annealing algorithm uses three operators. If the current operation implemented by an operator is successful, the optimization performance of that operator is increased by one. In the next operation, the selection probability of each operator is proportional to its accumulated optimization performance.
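Under this scheme, the selection probability of each operator is its accumulated performance divided by the total performance. A minimal sketch with illustrative names:

```python
def selection_probabilities(performance):
    """Roulette probabilities proportional to accumulated operator performance."""
    total = sum(performance)
    return [p / total for p in performance]

# Three operators; the second has succeeded most often so far.
print(selection_probabilities([1, 3, 1]))  # [0.2, 0.6, 0.2]
```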
(2) Parameter Knowledge. A simple way to improve the optimization performance of the simulated annealing algorithm is to adjust its parameter values dynamically during the optimization process. To lower the sensitivity of the experimental results to the parameters, we run the evolution process with several different parameter combinations and select the combination for the next iteration according to their optimization performance. An iteration is successful if it improves the global optimal solution; when solving the current case, the number of successful iterations obtained with a given parameter combination is taken as its optimization performance. Parameter knowledge refers to the knowledge accumulated about the optimization performance of the various parameter combinations.
During the initialization stage, the orthogonal method can be adopted to generate the different parameter combinations, and the optimization performance of each is initialized to 1. Before each iteration of the simulated annealing algorithm, a parameter combination is selected stochastically through roulette selection (based on the optimization performance of the combinations) as the parameters of the current iteration. If the global optimal solution is improved in this iteration, the optimization performance value of the parameter combination used is increased.
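The lifecycle of parameter knowledge can be sketched as follows; the parameter combinations shown are illustrative placeholders, not values from the paper.

```python
import random

combos = [                       # hypothetical parameter combinations
    {"t0": 10.0, "cooling": 0.95},
    {"t0": 5.0, "cooling": 0.99},
    {"t0": 1.0, "cooling": 0.999},
]
perf = [1] * len(combos)         # optimization performance, initialised to 1
rng = random.Random(42)

def next_combo():
    """Roulette selection of the parameter combination for the next iteration."""
    i = rng.choices(range(len(combos)), weights=perf)[0]
    return i, combos[i]

i, params = next_combo()
# ... one SA iteration would run here with `params` ...
improved = True                  # suppose it improved the global optimal solution
if improved:
    perf[i] += 1                 # reward the chosen combination

print(sum(perf))  # 4
```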
3.2. The Available Neighborhood Structures
The following neighborhood structures are designed on the basis of the basic ones; because they reuse the constraint checking of the basic neighborhood structures, the optimization process is greatly simplified.
(1) Arrange. Scheduling imaging satellites for composite tasks usually involves arranging tasks by composing or inserting the contained atomic tasks. Composition is preferred when arranging atomic tasks; insertion is adopted only when atomic tasks cannot be composed or the task has not been finished. This favors composing the observations of more tasks and helps to arrange more tasks.
(2) Relocate. This neighborhood reallocates the satellite resource and time window of a task. Since there are several time windows between satellites and tasks, two operations are involved: reallocating a task to a different time window of the same satellite, and reallocating a task to another satellite. Relocating aims to increase the chance of arranging other tasks by adjusting the satellite resources and time windows of tasks.
(3) Swap. This neighborhood exchanges two tasks. For parallel scheduling problems, three kinds of exchange neighborhoods have been described: exchanging two tasks on the same resource (exchanging their execution order), exchanging two tasks on different resources, and exchanging a scheduled task with an unscheduled one. For the multiple satellites mission planning problem, the relevant experiments indicate that the first two exchanges are unsuccessful, so this paper adopts the third kind, namely, exchanging tasks between the real satellite resource and a virtual one.
(4) Sample. The random sampling neighborhood is a special measure for handling large-scale neighborhoods effectively. It is not a new neighborhood structure itself but a method for generating new neighborhoods by randomly sampling existing ones. Specifically, before each move, the random sampling neighborhood samples the neighbors generated by a given neighborhood structure, restricting the move to the sampled part; the sampling proportion is usually specified in advance according to requirements.
Among these neighborhood structures, Arrange implements the insertion and composition operations on atomic tasks, with composition preferred, and is an improving neighborhood. Relocate and Swap perturb solutions by adjusting the resources and time windows of tasks and by exchanging scheduled tasks with unscheduled ones; they are regulative neighborhoods that increase the chance of arranging other tasks after the move, so they should be tried on all unfinished tasks.
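The Swap and Sample neighborhoods can be sketched together: enumerate the scheduled/unscheduled exchanges, then keep only a random fraction of the moves, as the sampling neighborhood prescribes. Function names and the sampling fraction are illustrative assumptions.

```python
import random

def swap_neighbors(scheduled, unscheduled):
    """Swap neighborhood: every exchange of a scheduled task with an
    unscheduled one, yielded as (new_scheduled, new_unscheduled) pairs."""
    for i, s in enumerate(scheduled):
        for j, u in enumerate(unscheduled):
            yield (scheduled[:i] + [u] + scheduled[i + 1:],
                   unscheduled[:j] + [s] + unscheduled[j + 1:])

def sampled(neighbors, fraction, rng):
    """Random sampling neighborhood: keep only a fixed fraction of the moves."""
    moves = list(neighbors)
    k = max(1, int(fraction * len(moves)))
    return rng.sample(moves, k)

rng = random.Random(0)
moves = sampled(swap_neighbors([1, 2, 3], [4, 5]), 0.5, rng)
print(len(moves))  # 3 of the 6 possible swap moves
```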
4. Experimental Results
Six sets of evenly distributed targets are generated with 100, 200, 300, 400, 500, and 600 targets, respectively. For each set of targets, several test cases are constructed with different numbers of satellites. Each case is run fifty times, and the averages of the results are compared to avoid the influence of randomness. To validate the performance of the proposed algorithm, the Heuristic Method (HM) and the Standard Simulated Annealing Algorithm (SSAA) are compared with the Knowledge-Based Simulated Annealing Algorithm (KSAA).
4.1. Experimental Results of Small Scale Testing Examples
Six small scale testing examples are used to test the performance of the different algorithms; the results are displayed in Figures 3 and 4. Figure 3 suggests that there is no remarkable difference in optimization performance among HM, SSAA, and KSAA. Figure 4 suggests that the computational time of SSAA and KSAA is slightly larger than that of HM, with no distinct difference between SSAA and KSAA. Since SSAA and KSAA are both intelligent optimization approaches, their computational time is larger than that of HM.
4.2. Experimental Results of Medium Scale Testing Examples
Six medium scale testing examples are used to test the performance of the different algorithms; the results are displayed in Figures 5 and 6. Figure 5 suggests that there is no significant difference in optimization performance between SSAA and KSAA, but both outperform HM. Since SSAA and KSAA are both intelligent optimization approaches, their optimization performance should be better than that of HM, and the experimental results confirm this. Figure 6 suggests that the computational time of SSAA and KSAA is slightly larger than that of HM, with no distinct difference between SSAA and KSAA.
4.3. Experimental Results of Large Scale Testing Examples
Six large scale testing examples are used to test the performance of the different algorithms; the results are displayed in Figures 7 and 8. Figure 7 suggests that the optimization performance of SSAA is better than that of HM and that the optimization performance of KSAA is better than that of SSAA. Since SSAA is an intelligent optimization approach, its optimization performance should be better than that of HM, and the experimental results confirm this. Moreover, KSAA extends SSAA with learning and guidance functions, so its performance should be better than that of SSAA, and the experimental results validate this as well. Figure 8 suggests that the computational time of SSAA is larger than that of HM and that the computational time of KSAA is larger than that of SSAA. As an intelligent optimization approach, SSAA needs more computation than HM on large scale examples; likewise, KSAA contains more modules than SSAA, so it needs more computation than SSAA.
The innovation of this paper lies in establishing a multisatellite mission planning model based on constraint satisfaction and proposing a knowledge-based simulated annealing algorithm to solve multisatellite mission planning problems; the relevant experiments demonstrate the effectiveness of this algorithm.
Future research directions can be summarized as follows. (1) Extend the types of knowledge: by extracting the experiential knowledge of experts on multiple satellites mission planning problems, we could employ this knowledge to guide the optimization process and enhance optimization efficiency as much as possible. (2) Adopt new knowledge mining techniques: advanced knowledge mining through machine learning or data mining could be employed to obtain useful knowledge from the optimization process.
Conflict of Interests
The authors declare no conflict of interests.
The authors would like to thank the reviewers for their valuable comments and constructive suggestions. They also thank the editor-in-chief, the associate editor, and the editors for their helpful suggestions. This work is supported by the National Natural Science Foundation of China (nos. 71101150, 71071156, and 71331008), the Program for New Century Excellent Talents in University, and the Ministry of Education in China (MOE) Project of Humanities and Social Sciences (no. 12YJC630078).
- V. Gerard and L. Michel, “Planning activities for Earth watching and observing satellites and constellations,” in Proceedings of the 16th International Conference on Automated Planning and Scheduling (ICAPS '06), Cumbria, UK, 2006.
- F. N. Kucinskis and M. G. V. Ferreira, “Planning on-board satellites for the goal-based operations for space missions,” IEEE Latin America Transactions, vol. 11, no. 4, pp. 1110–1120, 2013.
- A. Manzak and H. Goksu, “Application of Very Fast Simulated Reannealing (VFSR) to low power design,” in Proceedings of the 5th International Workshop on Embedded Computer Systems: Architectures, Modeling, and Simulation (SAMOS '05), pp. 308–313, Springer, Berlin, Germany, July 2005.
- Z. Y. Lian, Y. Wang, Y. J. Tan et al., “Integration solving framework for global navigation satellite system planning and scheduling problem,” Disaster Advances, vol. 6, no. 1, pp. 330–336, 2013.
- S. Rojanasoonthon and J. Bard, “A GRASP for parallel machine scheduling with time windows,” INFORMS Journal on Computing, vol. 17, no. 1, pp. 32–51, 2005.
- R. J. He and L. N. Xing, “A learnable ant colony optimization to the mission planning of multiple satellites,” Research Journal of Chemistry and Environment, vol. 16, no. S2, pp. 18–26, 2012.
- B. Suman and P. Kumar, “A survey of simulated annealing as a tool for single and multiobjective optimization,” Journal of the Operational Research Society, vol. 57, no. 10, pp. 1143–1160, 2006.
- A. Globus, J. Crawford, J. Lohn, and A. Pryor, “A comparison of techniques for scheduling earth observing satellites,” in Proceedings of the 19th National Conference on Artificial Intelligence (AAAI-2004): Sixteenth Innovative Applications of Artificial Intelligence Conference (IAAI '04), pp. 836–843, July 2004.
- D. A. Knapp, D. S. Roffman, and W. J. Cooper, “Growth of a pharmacy school through planning, cooperation, and establishment of a satellite campus,” American Journal of Pharmaceutical Education, vol. 73, no. 6, article 102, 2009.
- B. Sudarshan and D. R. Nikil, “Very fast simulated annealing for HW-SW partitioning,” Tech. Rep. CECS-TR-04-17, UC Irvine, 2004.
- C. Wang, J. Li, N. Jing, J. Wang, and H. Chen, “A distributed cooperative dynamic task planning algorithm for multiple satellites based on multi-agent hybrid learning,” Chinese Journal of Aeronautics, vol. 24, no. 4, pp. 493–505, 2011.
Copyright © 2013 Da-Wei Jin and Li-Ning Xing. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.