Complexity
Volume 2018 (2018), Article ID 2959030, 10 pages
https://doi.org/10.1155/2018/2959030
Research Article

Precision Security: Integrating Video Surveillance with Surrounding Environment Changes

1CAS Research Center for Ecology and Environment of Central Asia, Xinjiang Institute of Ecology and Geography, Chinese Academy of Sciences, Urumqi 830011, China
2School of Management Engineering and Business, Hebei University of Engineering, Handan 056038, China
3Center for Geo-Spatial Information, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
4School of Information Science and Engineering, Xiamen University, Xiamen 361005, China
5Key Laboratory of IOT Terminal Pivotal Technology, Harbin Institute of Technology, Shenzhen 518000, China

Correspondence should be addressed to Guiwei Zhang

Received 22 July 2017; Revised 19 October 2017; Accepted 14 December 2017; Published 8 February 2018

Academic Editor: Roberto Natella

Copyright © 2018 Wenfeng Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Video surveillance plays a vital role in maintaining social security although, until now, large uncertainty still exists in danger understanding and recognition, which can be partly attributed to intractable environment changes in the backgrounds. This article presents a brain-inspired computing of the attention value of surrounding environment changes (EC) with a process-based cognition model, by introducing a ratio of EC-implications within the considered periods. Theoretical models for computing the warning level of EC-implications to the universal video recognition efficiency (quantified as the time cost of implication-ratio variations from $r_k$ to $r_{k+1}$, $k = 1, 2, \ldots$) are further established. Embedding the proposed models into online algorithms is suggested as a future research priority towards precision security for critical applications and, furthermore, schemes for a practical implementation of such integration are also preliminarily discussed.

1. Introduction

Surveillance plays a vital role in maintaining social security and protecting the infrastructure facilities of a country [1, 2]. But until now, there are still considerable uncertainties associated with danger understanding and recognition, especially for engineering-critical applications [3–5], which can be partly attributed to the implications of environmental conditions for the video recognition efficiency of the surveillance system. It has been demonstrated that suitable model parameters in online algorithms and the difficulty level of object detection tasks can differ greatly across environments [6].

Surrounding environment changes, as particular changes in the backgrounds, are also responsible for some significant but still unresolved issues in object recognition and tracking [7]. Because the backgrounds cannot be well characterized under uncontrolled environment changes, surveillance video recognition becomes more intractable [8]. Recognition of objects, accidents, and behaviors in dynamic environments is still a great challenge in video surveillance [9]; it should be carried out through object detection, motion tracking and analysis, and the understanding and recognition of other details with robust and efficient algorithms. Environment changes are so rich and varied that an online algorithm with universal significance is demanded for effective danger detection and warning under dynamic environment changes [10–17].

Numerous algorithms have been developed to tackle video recognition challenges in various environments; however, a full understanding of environmental implications for video recognition efficiency demands learning models with universal significance (ignoring uncontrolled differences in real scenarios) [18–27]. That is the essential reason why current online algorithms, even the latest ones, for example, the latest models tackling crowd segmentation and high-dimensional, large-scale anomaly detection, still encounter considerable uncertainties [23, 24]. How to evaluate and compute the regulated attention in implications of surrounding environment changes and, furthermore, how to define the warning level of EC-implications to video recognition efficiency should be research priorities towards precision security in intelligent surveillance [21–27].

It has been widely recognized that video surveillance should consider the implications of surrounding environment changes, but until now, there are still no models for a universal evaluation of EC-implications to video recognition efficiency [4, 12–27]. To resolve the unresolved issues associated with uncontrolled EC-implications, various novel optimization models have been proposed and applied in current learning systems [13–15]. The robustness and efficiency of some online algorithms in tackling special EC-implications in specific scenarios were validated in a series of previous studies although, until now, universal models for computing the attention value and warning level of EC-implications to video recognition efficiency remain unaddressed; hence, improving the current surveillance systems is an urgent issue [16, 17].

The objectives of this study are to present a brain-inspired computing of the warning level of the implications of surrounding environment changes to video recognition efficiency, to model brain cognition processes and establish theoretical models for the precise computation of the attention value of EC, and to highlight the necessity of introducing the proposed models in critical applications.

2. Preliminary Formulation

A conceptual framework of precision security, which integrates video surveillance with EC, is shown in Figure 1. Danger detection under EC-implications is of great complexity because of the diversity of features. Precision security aims to present a better understanding of EC-implications to danger detection efficiency in sensitive areas; it allows us to consider not only “who is dangerous” but also “who is in danger” and to reduce uncertainties in uncontrolled and complicated real scenarios [28–31].

Figure 1: A conceptual framework of precision security with four real scenarios as examples: (a) smog (captured by an Android camera), (b) sandstorm (captured by a mobile phone), (c) blizzard (with videos collected from a drone), and (d) truck exhaust (with videos collected by a driving recorder).

Brain cognition of EC-implications can be approached in four processes: data acquisition, classification, computation, and inference. Throughout the paper, the original, classified, computed, and inferred data are denoted by EC1, EC2, EC3, and EC4, respectively. Obviously, ECi generates ECi+1, i = 1, 2, 3. To reduce uncertainty, assume that only EC3 and EC4 contribute to dispelling the EC-implications and generate regulated attention-effective data (denoted by EC*), which is determined by EC4 (with a contribution $c_4$) and a part of EC3 (with a contribution $c_3$).
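Read as a data pipeline, the four processes compose naturally; the following Python sketch is purely illustrative (the stage functions and the "change" field are our assumptions, not the authors' implementation):

```python
# Minimal sketch of the four-process cognition chain EC1 -> EC2 -> EC3 -> EC4;
# the stage functions are illustrative placeholders.
def acquire(stream):                   # EC1: original data
    return list(stream)

def classify(ec1):                     # EC2: label frames with/without an EC event
    return [(f, f["change"] > 0.5) for f in ec1]

def compute(ec2):                      # EC3: quantify the labelled changes
    return [f["change"] for f, is_ec in ec2 if is_ec]

def infer(ec3):                        # EC4: summarize into an inference
    return sum(ec3) / len(ec3) if ec3 else 0.0

frames = [{"change": c} for c in (0.1, 0.7, 0.9, 0.3)]
ec4 = infer(compute(classify(acquire(frames))))   # 0.8
```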

Denote by $A_k$ the amount of newly generated effective data EC* in the $k$-th brain learning period, $k = 1, 2, 3, \ldots$. Denote by $E_i(t)$ the amount of ECi at the $t$-th frame, and let $E_i(t_{k-1})$ and $E_i(t_k)$ be the amounts at the beginning and end of the $k$-th period, respectively, $i = 1, 2, 3, 4$, $k = 1, 2, 3, \ldots$. Assume that the average efficiency of data exploitation is $\eta$ and employ a function $\varphi$ to estimate the EC1 loss. Let $w_i$ be the degree of importance and $c_i$ the ECi contribution to EC*, $i = 3, 4$. During the $k$-th learning period (with length $T_k$), define the theoretical quantification of the attention value of EC-implications as the amount $A_k$ of EC*, and define the ratio $r_k$ by (1), where $r_k$ can be interpreted as the EC attention-time ratio in the $k$-th learning period.
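Since the body of (1) did not survive extraction, a minimal sketch under the assumed reading $r_k = A_k / T_k$ (attention value per unit period length) shows how $A_k$ and $r_k$ would be computed; the coefficients $c_3$, $c_4$, and $\eta$ below are illustrative values:

```python
def attention_value(ec3_amount, ec4_amount, c3=0.3, c4=1.0, eta=0.9):
    """Amount A_k of regulated attention-effective data EC* in one period;
    c3, c4 weight the EC3/EC4 contributions and eta is the assumed average
    efficiency of data exploitation."""
    return eta * (c4 * ec4_amount + c3 * ec3_amount)

def attention_time_ratio(a_k, t_k):
    """r_k of definition (1), read here as attention value per unit period length."""
    return a_k / t_k

a_1 = attention_value(ec3_amount=40.0, ec4_amount=25.0)   # A_1 = 33.3
r_1 = attention_time_ratio(a_1, t_k=60.0)                 # r_1 = 0.555
```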

Based on the performance of a rapid deep learning (DL) method, YOLO, which is one of the most efficient algorithms for object detection, classification, and tracking [32–36], such implications of EC to video surveillance, together with the attention value and warning level, are displayed in Figure 2.

Figure 2: Non-negligible surrounding environment changes (EC) with various EC-implications in video recognition: vague objects (a1 and b1), occlusion (a2 and b2), or dummy objects (c1 and c2).

Obviously, the attention-time ratio of EC is reduced under regulated attention, and the EC-warning level (denoted by $\alpha$) is measured by the corresponding time cost. Throughout the paper, the computation of $\alpha$ is formulated as the evaluation of the time cost of implication-ratio changes from $r_k$ to $r_{k+1}$, $k = 1, 2, \ldots$.

3. Theoretical Analyses

Nonlinear functional analyses have been confirmed as suitable for real scenario analyses and, indeed, multistage approaches have been widely employed in simulating disaster responses [37–42]. But danger understanding and recognition in precision security are worthy of reconsideration to dispel EC-implications, utilizing the determined EC-attention value and warning level of such implications. Recall that brain cognition of EC-implications can be theoretically approached in four processes; hence, correspondingly, the formulated problem should be resolved in a four-stage approach [40–46].

3.1. Attention Value of EC

The brain-inspired approach to the attention value and warning level of EC is shown in Figure 3, where the EC-implications are manifested as an evolution of attention value and warning level. Such an approach is independent of EC-types and hence has universal significance. Regulated attention in a brain-inspired data mining approach for behavior, accident, and emotion understanding can be carried out through the whole video sampling, training, and recognition processes [47, 48].

Figure 3: Brain-inspired evolution of the attention value and warning level of environment changes; the attention values of EC-implications and behaviors/accidents are represented by the circle sizes and text colors, respectively, while the warning level of EC-implications is represented by the arrow colors.

First, we have (2), which implies (3). Supposing that EC3 can fully convert to EC4, we obtain (4).

From (2)–(4) and the preliminary formulation, we have (5). Therefore, the theoretical quantification of $A_k$ (i.e., the attention value of EC-implications) is given by (6).

3.2. Determined Warning Level

It remains to determine the warning level of EC-implications. To reduce the time complexity of learning periods while preserving the EC-universal significance, the analysis can be divided into two cases: the time costs in different learning periods are independent, or the considered periods are mutually dependent.

Within a single learning period, if the EC evolution rate is fixed (denoted by $v$), then we have (7).

With a suitable substitution, we have (8).

Taking the variation of $v$ within this period into account, with a further substitution we have (9) and hence (10).
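The bodies of (7)–(10) are likewise lost; as one simple instantiation, assume first-order decay $r(t) = r_k e^{-vt}$ at the fixed evolution rate $v$, so that the warning level is the time cost $\alpha = \ln(r_k / r_{k+1}) / v$ for the ratio to fall from $r_k$ to $r_{k+1}$. This decay model is our assumption, not the recovered equation:

```python
import math

def warning_level(r_k, r_next, v):
    """Time cost alpha for the implication ratio to fall from r_k to r_{k+1},
    assuming first-order decay r(t) = r_k * exp(-v * t) at fixed rate v."""
    if not (0.0 < r_next < r_k) or v <= 0.0:
        raise ValueError("need 0 < r_{k+1} < r_k and v > 0")
    return math.log(r_k / r_next) / v

alpha = warning_level(r_k=0.6, r_next=0.3, v=1.5)   # ln(2)/1.5 ≈ 0.462
```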

For a video with $n$ learning periods, we have (11).

The solution of (11) is given by (12).

Equivalently, we have (13).

To simplify the representation of (13), define the time-parameters matrices (14), the original status matrix (15), and the dynamic functions matrix (16). We then obtain the matrix form of (13), namely (17).

Further considering the relationships between surveillance videos, we then have (18).

The symmetric form of (17) is given by (19).

Defining the associated matrices accordingly, we obtain (20), where $g_j$ is the correlative function of the $j$-th video in the considered security system, $j = 1, 2, \ldots, n$.
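The matrix form invites a vectorized computation across videos; in the hedged sketch below, the per-video ratios and evolution rates are stacked into arrays, and the coupling function g stands in (as a pure assumption) for the lost correlative functions of (19)–(20):

```python
import numpy as np

def warning_levels(r, r_next, v, g=np.arctan):
    """r, r_next, v: (n,) arrays of r_k, r_{k+1}, and evolution rates for n
    videos. The uncoupled levels follow the first-order model above; the
    coupling via g is an illustrative assumption, cf. (19)-(20)."""
    base = np.log(r / r_next) / v        # independent case, cf. (17)-(18)
    coupling = g(r - r.mean())           # cross-video association term
    return base * (1.0 + coupling)

r      = np.array([0.6, 0.5, 0.7])
r_next = np.array([0.3, 0.4, 0.2])
v      = np.array([1.5, 1.0, 2.0])
alpha  = warning_levels(r, r_next, v)
```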

Finally, the EC-warning level can be computed as the time cost from $r_k$ to $r_{k+1}$, $k = 1, 2, \ldots$. Regulated attention can be theoretically implemented in multidata fusion, learning, and modelling. A region of interest (ROI) or pedestrians of interest (POI) correspond to GIS-data, including time, place, and EC, through Internet of Things applications in real scenarios, as seen in Figure 4. It is worth noting that a 3D stereo generated from a 2D video sequence is advantageous for highlighting EC evolution and therefore also for determining the length of learning periods.

Figure 4: Implementation of the brain mechanisms of regulated attention and the corresponding determination of attention value and warning level, where a simple algorithm [39] is employed to generate a 3D stereo from a 2D video sequence and highlight the evolution of environment changes.
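The ROI/POI-to-GIS correspondence can be kept as one record per camera and learning period; a minimal sketch follows, with all field names hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ROIRecord:
    """Hypothetical record tying a region/pedestrian of interest to GIS
    context (time, place) and the current EC state from an IoT feed."""
    camera_id: str
    timestamp: datetime
    lat: float
    lon: float
    ec_type: str                 # e.g., "smog", "sandstorm", "blizzard"
    attention_value: float       # A_k for the current learning period
    warning_level: float         # alpha for the current learning period

record = ROIRecord("cam-07", datetime(2018, 2, 8, 9, 30), 43.82, 87.61,
                   "sandstorm", attention_value=33.3, warning_level=0.46)
```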

4. Simulation and Discussion

The models proposed in the present study are learning models with universal significance (ignoring uncontrolled differences in real scenarios), which aim to establish a theoretical framework for the environmental implications to video recognition efficiency and to serve a universal evaluation of EC-implications. Numerous algorithms have been developed to tackle video recognition challenges in various environments, but it is still difficult to describe the time complexity of learning periods. This can be largely attributed to the complexity of video recognition issues; even for a given issue, it is not easy to determine the learning periods for different EC-scenarios. Generally, the attention-time ratio of EC is reduced under regulated attention, and the EC-warning level can be measured by the corresponding time cost of reducing this ratio. We therefore formulate the parameter $\alpha$ as the time cost of implication-ratio changes from $r_k$ to $r_{k+1}$. For a detailed analysis of the time complexity, some examples of learning periods for video detection and tracking in different surveillance scenarios are presented in Figure 5. One possible solution for treating the time complexity is to embed the proposed models into online algorithms in critical applications, utilizing these newly added examples and evidence.

Figure 5: Examples of learning periods for video recognition in different surveillance scenarios for a detailed analysis of the time complexity, under the implications of smog (in the blue rectangles, captured by an Android camera), sandstorm (in the green rectangles, captured by a webcam on a mobile phone), blizzard (in the yellow rectangles, with videos collected from a drone), and truck exhaust (in the red rectangles, with videos collected by a driving recorder).

Because of the time complexity of learning periods, we assign EC-attention values for simulation: ten videos with given EC-attention values are listed in Table 1. Equations (17)–(20) are employed to simulate the brain-inspired computing of the corresponding EC-warning levels.

Table 1: Attention values of ten videos with non-negligible EC.

Ignoring the association among the ten surveillance videos, from (17) and (18), the EC-warning levels from $r_k$ to $r_{k+1}$ are $\alpha_1 = 0.8868$, $\alpha_2 = 0.1363$, $\alpha_3 = 1.5691$, and $\alpha_4 = 0.9220$, respectively, $k = 1, 2, 3, 4$. Taking the association among the ten surveillance videos into account, utilizing (19) and (20) with a suitable association function, the EC-warning levels from $r_k$ to $r_{k+1}$ are $\alpha_1 = 0.4096$, $\alpha_2 = 0.0984$, $\alpha_3 = 0.6314$, and $\alpha_4 = 0.9220$, respectively, $k = 1, 2, 3, 4$.

Characterizing the EC-warning level and the implied dangers is helpful for learning how well potential dangers can be detected by video surveillance in changing environments, especially in unmanned driving, where one major bottleneck is finding effective and efficient algorithms for danger detection and caution, mainly owing to the lack of adaptive attention in the utilized learning systems [49–51]. Numerous issues remain unresolved, a part of which result from poorly understood EC-implications [52–58]. The brain-inspired modelling approach to such implications in the present study depends mainly on the amount of attention data and the length of attention time, ignoring the differences among real scenarios; therefore, the proposed models have universal significance for critical applications. It is consequently necessary to consider the integration of the proposed models with online surveillance algorithms towards precision security [59–61]. Such precision security can be a great challenge because performance degradation of video recognition efficiency in critical environments has been demonstrated in several previous studies [6, 17, 21, 35].

For special scenarios in which EC-implications are not significant, integration of our models with online algorithms is unnecessary, and the computation can be largely simplified in such applications. Taking lane detection as an example, the biological principle is to detect and recognize a line, which can work well even if the lanes are partly missing [62–64], as seen in Figure 6 and sketched after the figure caption.

Figure 6: An example that does not normally require integration with the proposed models; truck exhaust, as a regional environmental change, has no significant implications for the efficiency of lane detection and conflict-danger warning. Lane detection keeps working although a part of the detected lane is temporarily missing (highlighted by the red caution text “Right Departure”) within the period of right departure, and a robust and efficient warning of conflict danger (highlighted by the red caution text “Conflict Alert”) works simultaneously during the period (Frame #156 to #164).
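For concreteness, the line-detection principle can be illustrated with a standard Hough-transform pipeline in OpenCV; this is a generic sketch in the spirit of [62–64], not the detector used in Figure 6. The maxLineGap tolerance is what lets votes bridge partly missing lane markings:

```python
import cv2
import numpy as np

def detect_lanes(frame_bgr):
    """Return line segments (x1, y1, x2, y2) found in one video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                  # edge map
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=25)
    return [] if lines is None else [tuple(l[0]) for l in lines]
```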

For complex applications, however, embedding the proposed models in current security systems becomes necessary, for example, in compressive sensing for sparse tracking [18] (which can be improved as locally compressive sensing within an ROI), the ViBe algorithm for real-time object detection from a moving camera [19], the AdaBoost algorithm for noise detection in an ROI [20], optical flow for robots’ recognition of environments [21], SVM clustering for accident classification [22], and deep learning algorithms for anomaly detection, crowd analysis, and hierarchical tracking within an ROI [23–27]. Object understanding and detection under dynamic environment changes are usually based on adaptive background subtraction and other object recognition methods [17, 21, 35, 65–68]. A preliminary scheme for the practical integration of the proposed models with these algorithms is presented in Figure 7, where smog, as a global environmental change, has significant implications for video behavior recognition: within a hovering period of two persons, only half of the hovering behaviors are detected by loitering detection; only one person is red-highlighted while the other remains in a green rectangle, indicating the degradation of video surveillance efficiency within the considered periods in real challenging scenarios. It is worth noting that the proposed models have analytic solutions and the time cost of each iteration is much shorter than that of any video recognition algorithm. Therefore, embedding the proposed models in current security systems for critical applications is not only necessary but also feasible; the proposed models can work with any online algorithms without a great loss in surveillance efficiency.

Figure 7: An example that demands integration of the proposed models with other online algorithms; smog, as a global environmental change, has significant implications for loitering detection within a hovering period of two persons, where only half of the hovering behaviors are detected and warned (highlighted by the red rectangles). A preliminary scheme for practical integration is subsequently presented.
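A preliminary reading of this integration scheme is an online loop in which the analytic attention/warning model runs beside an arbitrary detector and raises EC warnings alongside the ordinary recognition results; the detector interface, the period length, and the thresholds below are all illustrative assumptions:

```python
import math

def surveil(frames, detector, period=25, v=1.5, alert=1.0):
    """Run an online detector while tracking per-period attention-time ratios;
    emit an EC warning when the analytic warning level alpha exceeds `alert`."""
    events, r_prev = [], None
    for start in range(0, len(frames), period):
        chunk = frames[start:start + period]
        a_k = sum(f["change"] for f in chunk)        # attention value A_k
        r_k = a_k / len(chunk)                       # attention-time ratio r_k
        if r_prev is not None and 0.0 < r_k < r_prev:
            alpha = math.log(r_prev / r_k) / v       # warning level, as above
            if alpha > alert:
                events.append(("EC-WARNING", start, alpha))
        events.extend(detector(f) for f in chunk)    # ordinary recognition path
        r_prev = r_k
    return events
```

Because the warning level has a closed form, the added per-period cost is negligible next to the detector itself, which matches the feasibility claim above.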

5. Conclusion

Despite previous studies on algorithms for video surveillance in various environments, there are still considerable uncertainties in object detection, classification, and tracking. Understanding and recognition of the implications of surrounding environment changes to surveillance efficiency are still very limited. The brain-inspired modelling approach to such implications in the present study depends mainly on the amount of attention data and the attention time, ignoring differences among real scenarios. Therefore, the proposed models represent biological principles of computational intelligence and have universal significance for practical integration with online algorithms. Nevertheless, a full understanding of the complexity of learning periods for different EC-scenarios is still necessary; this is the next research priority towards a universal evaluation of the implications of surrounding environment changes to video recognition efficiency.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was financially supported by the Shenzhen Basic Research Project (JCYJ20150630114942260), the National Natural Science Foundation of China (41571299), the CAS “Light of West China” Program (XBBS-2014-16), and the “Thousand Talents” plan (Y474161).

References

1. A. Bekhouch, I. Bouchrika, and N. Doghmane, “Improving view random access via increasing hierarchical levels for multi-view video coding,” IEEE Transactions on Consumer Electronics, vol. 62, no. 4, pp. 437–445, 2016.
2. R. Bhatt and R. Datta, “A two-tier strategy for priority based critical event surveillance with wireless multimedia sensors,” Wireless Networks, vol. 22, no. 1, pp. 267–284, 2016.
3. J. Rajeshwari, K. Karibasappa, and M. T. Gopalkrishna, “Adaboost modular tensor locality preservative projection: face detection in video using Adaboost modular-based tensor locality preservative projections,” IET Computer Vision, vol. 10, no. 7, pp. 670–678, 2016.
4. Y. Zhang, Q.-Z. Li, and F.-N. Zang, “Ship detection for visual maritime surveillance from non-stationary platforms,” Ocean Engineering, vol. 141, pp. 53–63, 2017.
5. A. Abrardo, M. Martalò, and G. Ferrari, “Information fusion for efficient target detection in large-scale surveillance wireless sensor networks,” Information Fusion, vol. 38, pp. 55–64, 2017.
6. S. Murayama and M. Haseyama, “A note on traffic flow measurement for traffic surveillance video: reduction of performance degradation in various environments,” Infectious Disease Clinics of North America, vol. 23, no. 2, pp. 209–214, 2009.
7. A. E. Maadi and X. Maldague, “Outdoor infrared video surveillance: a novel dynamic technique for the subtraction of a changing background of IR images,” Infrared Physics & Technology, vol. 49, no. 3, pp. 261–265, 2007.
8. K. Srinivasan, K. Porkumaran, and G. Sainarayanan, “Background subtraction techniques for human body segmentation in indoor video surveillance,” Journal of Scientific & Industrial Research, vol. 73, no. 5, pp. 342–345, 2014.
9. H. Sun and T. Tan, “Spatio-temporal segmentation for video surveillance,” IEEE Electronics Letters, vol. 37, no. 1, pp. 20–21, 2001.
10. M. A. A. Dewan, M. J. Hossain, and O. Chae, “Background independent moving object segmentation for video surveillance,” IEICE Transactions on Communications, vol. E92-B, no. 2, pp. 585–598, 2009.
11. A. N. Taeki and M. H. Kim, “Context-aware video surveillance system,” Journal of Electrical Engineering & Technology, vol. 7, no. 1, pp. 115–123, 2012.
12. A. Milosavljević, A. Dimitrijević, and D. Rančić, “GIS-augmented video surveillance,” International Journal of Geographical Information Science, vol. 24, no. 9, pp. 1415–1433, 2010.
13. J. S. Kim, D. H. Yeom, and Y. H. Joo, “Fast and robust algorithm of tracking multiple moving objects for intelligent video surveillance systems,” IEEE Transactions on Consumer Electronics, vol. 57, no. 3, pp. 1165–1170, 2011.
14. Z. Zhang, M. Wang, and X. Geng, “Crowd counting in public video surveillance by label distribution learning,” Neurocomputing, vol. 166, pp. 151–163, 2015.
15. H. Yoon, Y. Jung, and S. Lee, “An image sequence transmission method in wireless video surveillance systems,” Wireless Personal Communications, vol. 82, no. 3, pp. 1225–1238, 2015.
16. M. K. Lim, S. Tang, and C. S. Chan, “iSurveillance: intelligent framework for multiple events detection in surveillance videos,” Expert Systems with Applications, vol. 41, no. 10, pp. 4704–4715, 2014.
17. K. A. Niranjil and C. Sureshkumar, “Background subtraction in dynamic environment based on modified adaptive GMM with TTD for moving object detection,” Journal of Electrical Engineering & Technology, vol. 10, no. 1, pp. 372–378, 2015.
18. Q. Yan and L. Li, “Kernel sparse tracking with compressive sensing,” IET Computer Vision, vol. 8, no. 4, pp. 305–315, 2014.
19. T. Kryjak, M. Komorkiewicz, and M. Gorgon, “Real-time implementation of foreground object detection from a moving camera using the ViBe algorithm,” Computer Science & Information Systems, vol. 11, no. 4, pp. 1617–1637, 2014.
20. J. Cao, S. Kwong, and R. Wang, “A noise-detection based AdaBoost algorithm for mislabeled data,” Pattern Recognition, vol. 45, no. 12, pp. 4451–4465, 2012.
21. M. Kimura and M. Shibata, “Environment recognition using optical flow in an autonomous mobile robot,” Parkinsonism & Related Disorders, vol. 14, no. 8, pp. S63–S64, 2008.
22. A. Temko and C. Nadeu, “Classification of acoustic events using SVM-based clustering schemes,” Pattern Recognition, vol. 39, no. 4, pp. 682–694, 2006.
23. S. M. Erfani, S. Rajasegarar, S. Karunasekera, and C. Leckie, “High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning,” Pattern Recognition, vol. 58, pp. 121–134, 2016.
24. K. Kang and X. Wang, “Fully convolutional neural networks for crowd segmentation,” Computer Science, vol. 49, no. 1, pp. 25–30, 2014.
25. M. Xu, J. Lei, and Y. Shen, “Hierarchical tracking with deep learning,” Journal of Computational Information Systems, vol. 10, no. 15, pp. 6331–6338, 2014.
26. J. Hu, J. Lu, and Y. P. Tan, “Deep metric learning for visual tracking,” IEEE Transactions on Circuits & Systems for Video Technology, vol. 26, no. 11, pp. 2056–2068, 2016.
27. J. Kuen, K. M. Lim, and C. P. Lee, “Self-taught learning of a deep invariant representation for visual tracking via temporal slowness principle,” Pattern Recognition, vol. 48, no. 10, pp. 2964–2982, 2015.
28. B.-G. Kim, “Fast coding unit (CU) determination algorithm for high-efficiency video coding (HEVC) in smart surveillance application,” The Journal of Supercomputing, vol. 73, no. 3, pp. 1063–1084, 2017.
29. R. Steen, “A portable digital video surveillance system to monitor prey deliveries at raptor nests,” Journal of Raptor Research, vol. 43, no. 1, pp. 69–74, 2009.
30. L. Chen, D. Zhu, J. Tian, and J. Liu, “Dust particle detection in traffic surveillance video using motion singularity analysis,” Digital Signal Processing, vol. 58, pp. 127–133, 2016.
31. M. De-La-Torre, E. Granger, R. Sabourin, and D. O. Gorodnichy, “Adaptive skew-sensitive ensembles for face recognition in video surveillance,” Pattern Recognition, vol. 48, no. 11, pp. 3385–3406, 2015.
32. S. Afaq Ali Shah, M. Bennamoun, and F. Boussaid, “Iterative deep learning for image set based face and object recognition,” Neurocomputing, vol. 174, pp. 866–874, 2016.
33. I. Lenz, H. Lee, and A. Saxena, “Deep learning for detecting robotic grasps,” International Journal of Robotics Research, vol. 34, no. 4-5, pp. 705–724, 2015.
34. B. Kamsu-Foguem and D. Noyes, “Graph-based reasoning in collaborative knowledge management for industrial maintenance,” Computers in Industry, vol. 64, no. 8, pp. 998–1013, 2013.
35. A. Ess, K. Schindler, B. Leibe, and L. Van Gool, “Object detection and tracking for autonomous navigation in dynamic environments,” International Journal of Robotics Research, vol. 29, no. 14, pp. 1707–1725, 2010.
36. J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: unified, real-time object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 779–788, 2016.
37. J. H. Ruan, X. P. Wang, F. T. S. Chan, and Y. Shi, “Optimizing the intermodal transportation of emergency medical supplies using balanced fuzzy clustering,” International Journal of Production Research, vol. 54, no. 14, pp. 4368–4386, 2016.
38. J. Ruan and Y. Shi, “Monitoring and assessing fruit freshness in IOT-based e-commerce delivery using scenario analysis and interval number approaches,” Information Sciences, vol. 373, pp. 557–570, 2015.
39. Z. Lu, S. Réhman, M. S. L. Khan, and H. Li, “Anaglyph 3D stereoscopic visualization of 2D video based on fundamental matrix,” in Proceedings of the 2013 International Conference on Virtual Reality and Visualization (ICVRV 2013), pp. 305–308, China, September 2013.
40. J. H. Ruan, X. P. Wang, and Y. Shi, “A two-stage approach for medical supplies intermodal transportation in large-scale disaster responses,” International Journal of Environmental Research & Public Health, vol. 11, no. 11, pp. 11081–11109, 2014.
41. H. Jiang and J. H. Ruan, “Fuzzy evaluation on network security based on the new algorithm of membership degree transformation—M(1,2,3),” Journal of Networks, vol. 4, no. 5, pp. 324–331, 2009.
42. J. Ruan, P. Shi, C.-C. Lim, and X. Wang, “Relief supplies allocation and optimization by interval and fuzzy number approaches,” Information Sciences, vol. 303, pp. 15–32, 2015.
43. W. Otjacques, F. De Laender, and P. Kestemont, “Discerning the causes of a decline in a common European fish, the roach (Rutilus rutilus L.): a modelling approach,” Ecological Modelling, vol. 322, pp. 92–100, 2016.
44. C. J. Littles, S. S. Pilyugin, and T. K. Frazer, “A combined inverse method and multivariate approach for exploring population trends of Florida manatees,” Marine Mammal Science, vol. 32, no. 1, pp. 122–140, 2016.
45. S. Santoro, A. J. Green, and J. Figuerola, “Immigration enhances fast growth of a newly established source population,” Ecology, vol. 97, no. 4, pp. 1048–1057, 2016.
46. J. D. H. Smith and C. Zhang, “Generalized Lotka stability,” Theoretical Population Biology, vol. 103, pp. 38–43, 2015.
47. R. Velik, “A brain-inspired multimodal data mining approach for human activity recognition in elderly homes,” Journal of Ambient Intelligence & Smart Environments, vol. 6, no. 4, pp. 447–468, 2014.
48. J. J. Wong and S. Y. Cho, “A brain-inspired framework for emotion recognition,” Magnetic Resonance Imaging, vol. 32, no. 9, pp. 1139–1155, 2006.
49. N. Ovcharova and F. Gauterin, “Assessment of an adaptive predictive collision warning system based on driver’s attention detection,” Clinical & Experimental Metastasis, vol. 8, no. 2, pp. 215–224, 2012.
50. A. Finn and K. Rogers, “Accuracy requirements for unmanned aerial vehicle-based acoustic atmospheric tomography,” Journal of the Acoustical Society of America, vol. 139, no. 4, p. 2097, 2016.
51. S. Kim, H. Oh, and A. Tsourdos, “Nonlinear model predictive coordinated standoff tracking of a moving ground vehicle,” Journal of Guidance, Control, and Dynamics, vol. 36, no. 2, pp. 557–566, 2013.
52. Z. Zheng, Y. Liu, and X. Zhang, “The more obstacle information sharing, the more effective real-time path planning?” Knowledge-Based Systems, vol. 114, pp. 36–46, 2016.
53. M. W. Whalen, D. Cofer, and A. Gacek, “Requirements and architectures for secure vehicles,” IEEE Software, vol. 33, no. 4, pp. 22–25, 2016.
54. R. Czyba, G. Szafranski, and A. Rys, “Design and control of a single tilt tri-rotor aerial vehicle,” Journal of Intelligent & Robotic Systems, vol. 84, no. 1–4, pp. 53–66, 2016.
55. X. Zhang and H. Duan, “An improved constrained differential evolution algorithm for unmanned aerial vehicle global route planning,” Applied Soft Computing, vol. 26, pp. 270–284, 2015.
56. T. Uppal, S. Raha, and S. Srivastava, “Trajectory feasibility evaluation using path prescribed control of unmanned aerial vehicle in differential algebraic equations framework,” The Aeronautical Journal, vol. 121, no. 1240, pp. 770–789, 2017.
57. A. V. Savkin and C. Wang, “A framework for safe assisted navigation of semi-autonomous vehicles among moving and steady obstacles,” Robotica, vol. 35, no. 5, pp. 981–1005, 2017.
58. Y. T. Tan, M. Chitre, and F. S. Hover, “Cooperative bathymetry-based localization using low-cost autonomous underwater vehicles,” Autonomous Robots, vol. 40, no. 7, pp. 1187–1205, 2016.
59. J. L. Crespo, A. Faiña, and R. J. Duro, “An adaptive detection/attention mechanism for real time robot operation,” Neurocomputing, vol. 72, no. 4–6, pp. 850–860, 2009.
60. W. Barbara, “Computational intelligence: from natural to artificial systems,” Connection Science, vol. 14, no. 2, pp. 163–164, 2002.
61. E. Bonabeau and C. Meyer, “Computational intelligence: a whole new way to think about business,” Harvard Business Review, vol. 79, no. 5, pp. 106–114, 2001.
62. Y. Wang, D. Shen, and E. K. Teoh, “Lane detection using spline model,” Pattern Recognition Letters, vol. 21, no. 8, pp. 677–689, 2000.
63. Z. Kim, “Robust lane detection and tracking in challenging scenarios,” IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, pp. 16–26, 2008.
64. Q. Li, N. Zheng, and H. Cheng, “Springrobot: a prototype autonomous vehicle and its algorithms for lane detection,” IEEE Transactions on Intelligent Transportation Systems, vol. 5, no. 4, pp. 300–308, 2004.
65. M. Dorigo, M. Birattari, and C. Blum, Ant Colony Optimization and Computational Intelligence, vol. 49, no. 8, Springer, 1995.
66. S. Garnier, J. Gautrais, and G. Theraulaz, “The biological principles of computational intelligence,” Computational Intelligence, vol. 1, no. 1, pp. 3–31, 2007.
67. H. P. Liu, D. Guo, and F. C. Sun, “Object recognition using tactile measurements: kernel sparse coding methods,” IEEE Transactions on Instrumentation & Measurement, vol. 65, no. 3, pp. 656–665, 2016.
68. H. P. Liu, Y. L. Yu, F. C. Sun, and J. Gu, “Visual-tactile fusion for object recognition,” IEEE Transactions on Automation Science & Engineering, vol. x, no. 99, pp. 1–13, 2017.