Security and Communication Networks


Research Article | Open Access

Volume 2021 | Article ID 6612438 | https://doi.org/10.1155/2021/6612438

Dong Zhang, Zhongyi Guo, "Mobile Sentry Robot for Laboratory Safety Inspection Based on Machine Vision and Infrared Thermal Imaging Detection", Security and Communication Networks, vol. 2021, Article ID 6612438, 16 pages, 2021. https://doi.org/10.1155/2021/6612438

Mobile Sentry Robot for Laboratory Safety Inspection Based on Machine Vision and Infrared Thermal Imaging Detection

Academic Editor: Vincenzo Conti
Received: 23 Oct 2020
Revised: 24 Mar 2021
Accepted: 10 Apr 2021
Published: 28 Apr 2021

Abstract

University chemistry laboratories have long been considered dangerous areas on campus, especially pharmaceutical chemistry laboratories, which are often the focus of laboratory management. Innovation in the safety management mode can effectively reduce or even eliminate security risks and lower the probability of safety accidents. With the rapid development of artificial intelligence, robots have been introduced into all walks of life, replacing humans in part of the safety inspection work. One of the core tasks of laboratory safety inspection is to find a fire source and take measures to extinguish the fire in time. This paper designs a sentry robot that patrols laboratories periodically based on machine vision while scanning its surroundings with an infrared thermal imager; when a dangerous situation is detected, it takes appropriate emergency measures, including sounding an alarm and carrying out firefighting, to minimize accidents and economic losses. Preliminary results show that, based on machine vision, the robot identified the fire source and adjusted the spray device to carry out the fire-extinguishing experiment successfully. The development of laboratory patrol robots can share the work of laboratory inspection to a certain extent, safeguard the safety of experimental teaching in colleges and universities, and leave room for further research.

1. Introduction

In recent years, the education system has introduced the concept of safety development, promoted the idea of life first and safety first, and achieved positive results in the safety work of university laboratories, with the overall security situation remaining stable. However, with the rapid development of laboratory construction in universities and scientific research institutes [1], safety accidents have occurred frequently in some university laboratories, most of them fires [2]. For example, in June 2013, a fire broke out in a basement of Peking University and some laboratories were damaged. In February 2015, a fire broke out in a laboratory at Nanjing University of Science and Technology. In December 2015, a fire at Tsinghua University killed a postdoctoral fellow. In March 2016, a fire broke out at the Shanghai Institute of Organic Chemistry, Chinese Academy of Sciences. In February 2019, Nanjing Tech University caught fire [3]. Such accidents not only affect teaching but also cause casualties and property losses, so they demand great attention.

University laboratories are places where scientific research and teaching experiments are carried out. Safety risks are widely distributed there, including hazardous chemicals, biological agents, special equipment, and materials that easily produce poisons or explosives, so relatively high safety risks concentrate in them. Laboratories, sometimes called the “distribution center” of dangerous goods, carry hidden risks of fire and explosion. They differ from ordinary chemical plants and chemical warehouses: laboratories often create new “things”, and the reaction produced when two chemicals combine may be unknown. Therefore, universities should pay particular attention to the safety of laboratories.

There are many instruments and pieces of equipment in the pharmaceutical chemistry laboratory, such as rotary evaporators, water baths, circulating-water vacuum pumps, and electric thermostatic forced-air drying ovens, and such instruments are used in a large number of classes as well. In addition, as the equipment ages, certain risks and hidden dangers emerge. For example, students may be unfamiliar with the nature of the chemicals or the experimental procedure, open-flame heating devices may be used for a long time, and laboratory waste may not be disposed of in time, any of which can cause a fire [4]. Moreover, pharmaceutical chemistry laboratories are frequent sites of laboratory safety accidents because they store large quantities of flammable and explosive organic drugs and organic solvents [5]. A large fire can be prevented by detecting a small one as early as possible. To address this problem, universities have had to inspect pharmacy and chemistry laboratories frequently, with large investments of manpower and material resources [6].

At present, laboratory safety inspection depends on manpower, which is inconvenient especially when the related personnel are off duty or during holidays, and negligence is inevitable.

With the advent of modern robotics, artificial intelligence (AI) technology has advanced by leaps and bounds, affecting and transforming industries including manufacturing, health care, education, law, and agriculture. The main market drivers come from fields such as computer vision devices, self-driving cars, advanced cameras, and image sensors [7]. In recent years, robots have been introduced into the field of safety inspection, replacing humans in most routine inspection tasks.

The Anson Intelligent ACR explosion-proof detection robot independently developed by China adopts a modular design, with a variety of detection sensors such as gas detectors, laser detectors, and infrared thermal imaging. It can perform concentration monitoring, imaging, and comprehensive analysis for targeted gas and also trigger the alarm system when necessary for timely staff handling. The comprehensive and thorough security monitoring significantly improves security.

The safety work of pharmaceutical chemistry laboratories in colleges and universities involves many groups, has wide influence, and is tied to the stable development of institutions with pharmaceutical programs. To date, no artificial intelligence approach has been reported for fire patrol inspection of any pharmaceutical chemistry laboratory.

The key technology of ensuring fire safety is to judge the flame disaster, and the commonly used means are infrared thermal sensing and flame image recognition. Flames have obvious visual features, which are usually identified according to static features such as color, shape, and texture of the flames. As early as 1996, Professor Simonofer of the University of Florida began to use infrared imaging video information of aircraft engines and booster cabin fire to conduct research on the detection of such special fire [8]. In 2002, Professor Walter from the University of Central Florida conducted a special study on the video image recognition of flames, simplified the flame recognition algorithm according to the color characteristics of flames, and succeeded [9]. Chen et al. combined RGB color segmentation with flame motion characteristics to determine flame pixels [10]. Ko et al. extracted candidate flame regions by detecting moving regions and judging flame color and extracted the features of the candidate regions for training SVM classifiers to realize fire and nonfire determination [11].

As early as 1800, the British physicist F. W. Herschel discovered infrared thermal radiation using a thermometer and a prism. In modern physics, according to Planck’s blackbody radiation law, all objects radiate electromagnetic waves according to their temperature, among which electromagnetic waves with wavelengths of 2.0–1000 μm are thermal infrared rays. All objects in nature whose temperature is above absolute zero (−273°C) emit infrared radiation all the time. Such infrared radiation carries characteristic information about the objects, which is the basis for infrared technology to distinguish the temperature and heat distribution fields of various measured targets. The infrared image obtained is also called a thermal image. A thermal image reflects the temperature field on the surface of an object, and it has been widely used in military, industrial, automotive driving assistance, and other fields.

The infrared thermal imager measures infrared energy by noncontact detection and converts it into electrical signals. After computer processing, the signals are projected onto the display to generate thermal images. The pixel values on the images indicate the temperatures of the corresponding positions, from which the temperature values can be calculated. As the infrared thermal imager can accurately quantify and measure the heat detected, it can not only observe the thermal field distribution free of visible-light interference but also accurately identify and strictly analyze heating fault areas [12]. In addition, owing to the strong penetration ability of infrared rays, it can still image accurately in an environment with a certain amount of smoke, which is of great significance for automatic identification of fire hazards.

Designing a robot for laboratory security inspections could greatly reduce the work of inspectors [13]. Sentry robots use AI algorithms to realize patrol, hazard identification, alarm and firefighting, and other functions. Through robot patrol inspection, some safety problems can be found in time [14].

By contrast, the overall cost of inspection robots is relatively low. Inspection robots equipped with machine vision and thermal imaging cameras can inspect any laboratory, and more frequent preventive inspection by such robots can largely prevent safety accidents. Zhu et al. developed an intelligent fire detection system based on infrared image feedback control, which achieves continuous tracking of the fire-extinguishing process by adjusting the yaw angle of the fire monitor. Such an intelligent fire-monitoring feedback control system based on infrared images has great potential for firefighting robots [15]. The fire reconnaissance robot developed by Li et al. is based on SLAM localization and thermal imaging technology, which can improve the efficiency of fire rescue and reduce fire casualties [16].

SGS, the internationally recognized inspection, identification, testing, and certification agency, recently launched the laboratory Automated Logistics System (BUS) robot “Xiaozhi”. “Xiaozhi” can intelligently and accurately complete highly repetitive and burdensome point-to-point material delivery, avoiding the safety risks of laboratory personnel transporting chemicals. Especially during the COVID-19 outbreak, the contactless delivery mode can reduce the risk of exposure for laboratory staff [17].

In this study, considering the characteristics of pharmaceutical chemistry laboratories, a flame identification system suitable for patrol robots is designed by combining flame identification with an infrared thermal imager.

2. Methods

2.1. Flame Recognition and Detection Based on Camera Image

Under the monitoring of the visual system, the flame is special in nature. It is mainly orange-red in color, and the flame shape is constantly variable [18]. We extracted the flame characteristics to build a flame model. After identifying the flame by the color of the pixel, we can outline the flame according to the shape of the flame for display and result description.

The distribution of the flame color is not invariable and it is mainly affected by conditions such as burning substance, oxygen environment, observation distance, and combustion adequacy (Figure 1). In human vision and RGB color space, the main color of the flame is red and the proportion of red is within a certain range. At the same time, through the analysis of Red, Green, and Blue attribute values, it can be found that there is a certain logical relationship among the R, G, and B values [19].

The algorithm flowchart is shown in Figure 2.

As shown in Figure 2, the algorithm first calculates and processes the relevant parameters of the camera image, such as the RGB values and the maximum and minimum values of RGB. The algorithm designed in this paper needs to pass two judgments to decide whether the camera image contains a flame. The first judgment is based on the logical relationship between the threshold value of the R channel and the attribute values of R, G, and B, together with the ranges of the attribute components, which are designed by studying and analyzing the color law of the flame. The specific conditions are as follows:
(1) R > RT
(2) R > G > B
(3) S = 1 − 3 × min(R, G, B)/(R + G + B), S > (255 − R)/(RT/ST), S > (255 − R)/20, S > 0.2

In the formula, S is the saturation, which can equivalently be written as S = 1 − min(R, G, B)/I, where I = (R + G + B)/3 is the intensity; it indicates the degree of color dilution.

Through the analysis of the R, G, and B values of different flame images, it can be seen that the G attribute value of a flame has a specific range, so the second judgment further determines whether the image contains a flame through the mean square error of the G attribute values. Here, the variable Flag is set to complete the two loops that obtain the R, G, and B mean values and mean square errors.
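
The two judgments above can be sketched in a few lines of Python. RT (red-channel threshold) and ST (saturation constant) are illustrative placeholder values, since the paper does not give their exact settings.

```python
# Sketch of the two-stage flame judgment described above. RT and ST are
# assumed placeholder values, not values from the paper.

RT = 135   # assumed red-channel threshold
ST = 0.2   # assumed saturation constant

def is_flame_pixel(r, g, b):
    """First judgment: color-rule test on one RGB pixel."""
    if not (r > RT and r > g > b):
        return False
    s = 1 - 3 * min(r, g, b) / (r + g + b)  # HSI saturation
    # (255 - r) / (RT / ST) is the same quantity as (255 - r) * ST / RT
    return s > (255 - r) * ST / RT and s > 0.2

def g_channel_mse(pixels):
    """Second judgment: mean square error of the G values of candidate pixels."""
    gs = [g for (_, g, _) in pixels]
    mean_g = sum(gs) / len(gs)
    return sum((g - mean_g) ** 2 for g in gs) / len(gs)
```

An orange-red pixel such as (250, 120, 40) passes the first judgment, while a green-dominant pixel does not; the second judgment then checks whether the G-value spread of the candidate region falls in the expected range.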

After the flame features were extracted, the flame image was processed through image preprocessing and image morphology. Using the feature algorithm, the image was converted to grayscale and binarized. Because the binarized image was disturbed by many factors and the flame shape was incomplete, mean filtering was applied to make the flame region more complete and fuller. Then thresholding was carried out, and mean filtering was used to suppress isolated points in the image. Finally, the edges and details of the flame were smoothed through erosion and dilation.
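
The erosion and dilation steps can be illustrated with a minimal pure-Python sketch using a 3×3 structuring element; a real implementation would use OpenCV's cv2.erode and cv2.dilate, so this is only to show the idea on a binary image.

```python
# Pure-Python sketch of 3x3 erosion and dilation on a binary image
# (list of rows of 0/1), illustrating the smoothing step above.

def _neighborhood(img, y, x):
    """Values in the 3x3 window around (y, x), clipped at the borders."""
    h, w = len(img), len(img[0])
    return [img[j][i]
            for j in range(max(0, y - 1), min(h, y + 2))
            for i in range(max(0, x - 1), min(w, x + 2))]

def erode(img):
    """A pixel stays 1 only if its whole 3x3 neighborhood is 1."""
    return [[1 if min(_neighborhood(img, y, x)) == 1 else 0
             for x in range(len(img[0]))] for y in range(len(img))]

def dilate(img):
    """A pixel becomes 1 if any pixel in its 3x3 neighborhood is 1."""
    return [[1 if max(_neighborhood(img, y, x)) == 1 else 0
             for x in range(len(img[0]))] for y in range(len(img))]
```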

After flame identification, it is necessary to identify and depict the flame contour in order to visually represent the flame shape and mark the flame. In OpenCV, the design mainly uses two functions, findContours and drawContours. findContours is mainly used to extract contours, and its calling form is as follows: void findContours(InputArray image, OutputArrayOfArrays contours, OutputArray hierarchy, int mode, int method, Point offset = Point()).

Next, there are several common ways to describe a contour, such as vectorization into polygons, circles, rectangles, and so on. This design used the rectangle method, mainly calling the boundingRect function.

The drawContours function draws the outline according to its description and is called as follows: void drawContours(InputOutputArray image, InputArrayOfArrays contours, int contourIdx, const Scalar& color, int thickness = 1, int lineType = 8, InputArray hierarchy = noArray(), int maxLevel = INT_MAX, Point offset = Point()).
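
The boundingRect step can be sketched without OpenCV: given a binary flame mask, the bounding rectangle is simply the minimum and maximum of the foreground pixel coordinates, which is what cv2.boundingRect computes for a contour. This pure-Python version is only an illustration of that idea.

```python
# Sketch of the bounding-rectangle computation used to mark the flame.

def bounding_rect(mask):
    """Return (x, y, w, h) of the tightest rectangle around all 1-pixels,
    or None if the mask is empty."""
    points = [(x, y) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    if not points:
        return None
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
```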

2.2. Flame Identification and Detection Based on Infrared Thermal Imaging

According to current technology, the sensors used in infrared imaging fall into two categories: uncooled thermal imagers, represented by vanadium oxide detectors, and cooled thermal imagers, which must operate at low temperature. Cooled thermal imagers feature high sensitivity, fast response, and clear images, but they are large and expensive and are generally used in the military industry. Uncooled equipment has slightly lower sensitivity but basically meets civil requirements, and since it needs no low-temperature refrigeration, its price can be as little as one-tenth that of cooled equipment. At present, mainstream uncooled detectors generally use vanadium oxide or polysilicon. In addition, an infrared imaging system should include infrared germanium lenses with the corresponding transmission wavelengths, electronic components for electrical signal processing and control, and the corresponding software. Taking the uncooled focal-plane infrared thermal imaging system as an example, it consists of an optical system, spectral filtering, an infrared detector array, an input circuit, a readout circuit, video image processing, video signal formation, a timing-pulse synchronization control circuit, a monitor, and so on.

The infrared radiation of the measured target received by the optical system can be reflected on the photosensitive elements of the infrared detector array on the focal plane through spectral filtering. After the detector detects the thermal radiation, its array will output the grayscale image of the target point. After digital image processing, the contrast display in the picture can be optimized to obtain better visual effects [20].

In this experiment, an infrared thermal imager control system was selected as the experimental platform, which could measure the temperature of any element point in the image and transmit video streams through FLIR Tools [21].

The FLIR infrared thermal imager is the core of the infrared thermal imager control system. The mobile sentry robot can identify the position and fire size of the fire point through the infrared thermal imager vision algorithm of this system. The functional architecture of the detection and control system is shown in Figure 3.

As shown in Figure 3, the main structure of the flame recognition control system based on infrared thermal imaging includes an infrared thermal imager, embedded development board, and alarm bell. An infrared thermal imager consists of a detector, an optical imaging objective, and a photosensitive element with which a thermal image is generated. The embedded development board receives images from the infrared thermal imager and performs image visual recognition. Through the subsequent thermal image processing algorithm, it extracts the area above a certain temperature in the image and finally determines the size relationship between the extracted area and the critical area. If the extracted area is larger than the critical area, it returns a dangerous signal. The alarm bell, after receiving the dangerous signal from the embedded development board, completes its own circuit to sound the alarm [22].

The operation interface of the infrared thermal imager is shown in Figure 4.

As shown in Figure 4, in addition to the target image, there is the temperature bar area, the highest temperature digital area, the lowest temperature digital area, the FLIR flag area, and the charging flag area in the thermal image. After receiving the thermal image from the infrared thermal imager, the embedded development board firstly extracts the temperature value digital area and temperature bar area, uses template matching to identify extreme temperature values according to the digital pixel characteristics, and stores returned values in two predefined variables.

In addition, these areas are black and white, causing great interference with the target image processing. Therefore, before image processing and flame extraction, these areas are covered and the image is then transformed into a grayscale image. The measurements show that there are five areas:
bar0 = src(Rect(305, 28, 10, 185))
bar1 = src(Rect(280, 8, 33, 14))
bar2 = src(Rect(280, 219, 33, 14))
bar3 = src(Rect(8, 6, 36, 17))
bar4 = src(Rect(3, 218, 54, 20))

The threshold function is applied to the binarization of five areas for coverage.
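
The covering step can be sketched as follows, treating the grayscale image as a list of rows. Each rectangle is (x, y, w, h) as in the Rect calls above; covered pixels are set to 0 (black) here as an assumed fill value so they cannot be mistaken for flame.

```python
# Sketch of covering the five interface areas before flame extraction.
# RECTS mirrors the bar0..bar4 rectangles listed above.

RECTS = [(305, 28, 10, 185), (280, 8, 33, 14), (280, 219, 33, 14),
         (8, 6, 36, 17), (3, 218, 54, 20)]

def cover_areas(img, rects=RECTS, fill=0):
    """Overwrite each rectangular region of the image with a flat value."""
    for x, y, w, h in rects:
        for row in img[y:y + h]:
            row[x:x + w] = [fill] * len(row[x:x + w])
    return img
```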

After local coverage of the image, the shape and outline of the flame were obtained by comparing each point on the image with the established threshold value. Since the thermal imager used in this paper has a temperature bar display on the interface, the temperature bar can be used to determine the temperature threshold and pixel threshold. According to the linear relation between the temperature and the coordinates on the temperature bar, the following formula was used to obtain the threshold coordinate on the temperature bar:

y = y1 + (Ts − Tmin)/(Tmax − Tmin) × (y2 − y1),

where y is the coordinate of the corresponding point, Tmax is the maximum temperature, Tmin is the minimum temperature, Ts is the threshold temperature, y1 is the lowest-temperature coordinate, and y2 is the highest-temperature coordinate.

The average pixel value of the temperature bar row at the threshold coordinate was taken as the threshold pixel value. Then a point-by-point pixel comparison was carried out on the image: when a pixel value is greater than the pixel threshold, the point is determined to be a flame point. Finally, the threshold function was used to represent the flame part in white and the nonflame part in black, and the shape of the flame was extracted. This area was used as the signal to trigger the alarm [23].
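
The linear mapping from threshold temperature to temperature-bar coordinate can be written directly from the relation described above; the argument names mirror the symbols (y1 is the lowest-temperature coordinate, y2 the highest-temperature coordinate).

```python
# Sketch of the temperature-bar interpolation: map a threshold
# temperature t_s to its coordinate on the bar.

def threshold_coordinate(t_s, t_min, t_max, y1, y2):
    """Linearly interpolate the bar coordinate for threshold temperature t_s."""
    return y1 + (t_s - t_min) / (t_max - t_min) * (y2 - y1)
```

For example, with a bar spanning 0–100°C between coordinates 212 (coldest) and 28 (hottest), a 50°C threshold lands halfway along the bar.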

2.3. Construction of Mobile Sentry Robot

The mobile sentry robot platform using an STM32F407 processor can detect flames through an infrared thermal imager and realize the functions of patrol, self-identification of flames, confirmation of fire size, and alarm or extinguishing [24].

The robot can move freely within a specific range, patrol automatically, detect obstacles through ultrasonic sensors, and turn automatically when the distance is shorter than the set value. At the same time, the infrared thermal imager rotates in the horizontal direction to identify the surrounding environment. When a flame is recognized, the robot will turn toward the flame, approach the flame, sound an alarm, and take the next fire-extinguishing measures. The robot is composed of a mechanical structure system, hardware structure system, circuit control system, thermal imager control system, and communication system [25].

2.3.1. Mechanical Structure System of a Robot

In the mechanical structure system, the main design is divided into the chassis based on mecanum wheels, the fire spray device, and the single-axis cradle head for the infrared thermal imager (Figure 5).

A mobile robot platform, an important research direction in robotics, can be wheeled, tracked, legged, or of composite structure, which needs to be determined according to the actual application scenario [26]. According to the application scenario analysis, the patrol route of the sentry robot is fixed, the terrain is flat, and the robot needs to move in all directions. On this basis, a horizontal comparison of the structures was performed: the crawler chassis is relatively complex and hard to maintain; the leg-type structure is complicated to control and cannot exploit its advantages on flat ground with few obstacles to overcome; wheeled structure control is mature and easy to maintain. Therefore, the wheeled structure was adopted for the chassis design in this paper.

Among the wheeled structures, there are three kinds of structures commonly used in fixed-route patrol: unidirectional, bidirectional, and omnidirectional. And omnidirectional movement is the most widely used. At present, the only structure that can move in all directions without changing the direction of the car body is the mecanum wheel structure, which was introduced in 1973. A number of small rollers are arranged diagonally along the edge of the mecanum wheel, each of which is able to rotate around the axis of the mecanum wheel, around its own axis, and around the point at which the mecanum wheel contacts the ground. The wheeled sentry robot based on mecanum wheel technology can realize forward, backward, horizontal, lateral, oblique, and rotation movements of the robot and a combination of these movements, which has obvious advantages in limited space operation [27].

The mecanum wheel is an omnidirectional moving wheel. Its main components are a fixed hub and a series of movable rollers, the central axes of which are at a 45-degree angle to the hub axis. Therefore, when the mecanum wheel moves, the rollers come into contact with the ground and the friction acts at a 45-degree angle to the hub. When the two wheels move in the same direction, their forces form a 90-degree angle and the resultant force points in the longitudinal direction, so the car can move back and forth. When the wheels on the two sides of the car do not turn in the same direction, the resultant force of the wheels on both sides is lateral, so left-right movement can be completed without rotating the car body. The principle of motion is shown in Figure 6: red means the wheel moves forward and blue means the wheel moves backward.
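
The motion combinations described above follow from the standard inverse kinematics of a four-mecanum-wheel chassis. The sketch below is a generic textbook formulation, not taken from the paper; lx and ly (half the wheelbase and half the track) are assumed parameters.

```python
# Illustrative inverse kinematics for a four-mecanum-wheel chassis:
# given desired body velocities (vx forward, vy leftward, w rotation),
# compute the four wheel speeds.

def mecanum_wheel_speeds(vx, vy, w, lx=0.2, ly=0.2):
    """Return speeds (front-left, front-right, rear-left, rear-right)."""
    k = lx + ly
    return (vx - vy - k * w,   # front-left
            vx + vy + k * w,   # front-right
            vx + vy - k * w,   # rear-left
            vx - vy + k * w)   # rear-right
```

Pure forward motion drives all four wheels equally, while pure sideways motion drives diagonal wheel pairs in opposite directions, matching the force analysis above.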

The structure of the fire-extinguishing spraying device is shown in Figure 7.

The jet structure is controlled by two linear motors, which raise and lower the jet device so as to expand the fire-extinguishing range. The pressing structure is controlled by two screw motors to simulate a fire extinguisher being pressed by hand.

Figure 8 shows the structure of the single-axis cradle head of the infrared thermal imager.

The infrared thermal imager was installed on the cradle head, and the rotation of the cradle head was realized by the steering gear. The infrared thermal imager can detect the surrounding environment through rotation of the cradle head. This structure expands the field of vision and can obtain environmental information more conveniently.

2.3.2. Hardware Structure

The main control board used an STM32F407 chip as its processor. The four 3510 motors of the chassis used dedicated 3510 electronic speed controllers, and the four controllers were connected to the main control board; the steering gear of the cradle head was connected to the main control board; the linear motor of the spraying mechanism was connected to the main control board through a high-power motor drive module; the two positioning ultrasonic sensors passed through an auxiliary data processing module with an STM32F103 chip as its processor and were then connected to the main control board; the thermal imager was directly connected to the main control board.

2.3.3. Circuit Control System

The chassis motion control adopted an STM32 embedded control system to adjust PID (proportion, integration, and differentiation) of four 3510 motors [28]. The control algorithm flow is shown in Figure 9.
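
A minimal positional PID loop of the kind used for the motor speed control can be sketched as follows; the gains are illustrative placeholders, not values from the paper.

```python
# Minimal positional PID sketch for the chassis motor speed loop.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        """One control step: return the motor command."""
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the real system this update would run at the control-loop rate against encoder feedback from each 3510 motor.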

The cradle head steering gear is controlled by the chassis sending a PWM square wave every second so that the steering gear rotates by a certain angle. When a flame is detected, the main control board records the marked position, reads the current angle of the steering gear, and uses the gyroscope to rotate the chassis to this angle (Figure 10).

Ultrasonic obstacle avoidance control: the auxiliary board sends trigger signals according to the module’s protocol and then receives the echo-response signal to calculate the distance to the obstacle [29]. When the detected distance is shorter than 50 cm, the auxiliary board sends a low-level signal to the main control board, which, after a logic calculation, makes the car rotate 180° on the spot (Figure 11).
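
The distance calculation from the echo signal follows from the round-trip time of sound: the obstacle distance is half the travel at the speed of sound. The sketch below assumes roughly 343 m/s (room temperature) and the 50 cm turning threshold from the text.

```python
# Sketch of the ultrasonic echo-to-distance conversion and the 50 cm
# obstacle-avoidance decision described above.

SPEED_OF_SOUND = 343.0  # m/s, assumed ambient conditions

def ultrasonic_distance_cm(echo_seconds):
    """Echo pulse width is the round-trip time, so halve it for one way."""
    return echo_seconds * SPEED_OF_SOUND / 2 * 100  # in cm

def should_turn(echo_seconds, limit_cm=50.0):
    """True when the obstacle is closer than the 50 cm threshold."""
    return ultrasonic_distance_cm(echo_seconds) < limit_cm
```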

MPU6050 control is realized by reading the DMP-computed data of the MPU6050 over I2C and obtaining the attitude angle of the fire robot in real time after Kalman filtering [30]. When the fire robot receives the information that it needs to turn, the attitude angle is read and the robot is rotated to the specified angle [31].
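
The paper fuses the MPU6050 data with a Kalman filter; as a lighter-weight illustration of the same gyroscope/accelerometer fusion idea, here is a complementary filter for one attitude angle. The blending coefficient alpha is an assumed value, and this is not the filter used in the paper.

```python
# Complementary-filter sketch of gyro/accelerometer attitude fusion
# (a simpler stand-in for the Kalman filter mentioned above).

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend the integrated gyro rate (short-term) with the accelerometer
    angle (long-term drift correction) to estimate the attitude angle."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```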

The control of the fire extinguisher press device is realized through two small screw motors: when a forward voltage is applied, the motor turns forward, and vice versa. The jet tube is lifted by a linear motor. The main control board sends a PWM signal to the motor drive module to control the voltage and direction of the linear motor, thereby controlling the raising and lowering of the jet pipe and its speed.

2.3.4. Infrared Thermal Imager Control System

The infrared thermal imager control system mainly includes an infrared thermal imager module and an Open Source Computer Vision Library (OpenCV) control algorithm. The infrared thermal imager displays the maximum and minimum temperature values and a “color-temperature” bar for that temperature range. Starting from the extracted temperature digital areas, the vision algorithm uses template matching, based on the digital pixel characteristics, to identify the extreme temperature values, and the returned values are stored in two predefined variables. With the target temperature set in advance, all pixel values of the row corresponding to the target temperature in the temperature bar are extracted and averaged, and this value is used as the threshold parameter of the threshold image to extract the area above the target temperature [32].

2.3.5. Related Research on Path Planning

Path planning usually refers to the following: the robot senses the environment through the sensor and plans its running path. There are three problems to be solved in this process: (1) the robot can reach the destination from the starting point; (2) algorithms are used to allow the robot to avoid obstacles or undertake tasks; and (3) try to optimize the operation route on the premise of completing the task [33].

The commonly used path planning methods are as follows:
(1) Template-matching path planning: a template library is built by summarizing and collecting path planning information that has been used or generated; during planning, the current task and environment information are compared and matched against the library to find an appropriate template to adapt. When the environment is fixed, template-matching path planning works well.
(2) Artificial potential field path planning: the robot is assumed to move in a virtual artificial potential field in which obstacle points generate repulsive forces and target points generate attractive forces; their resultant steers the robot around obstacles to the destination. Early work studied the artificial potential field in static environments, but designing the attractive and repulsive forces in dynamic environments remains difficult.
(3) Map construction path planning: the environment is divided into grid spaces based on environmental information, especially the obstacle information obtained by the robot’s own sensors; the occupancy of obstacles in the grid spaces is calculated, and the optimal path is then computed by an algorithm. This technology divides into the grid method and the road-marking method: the grid method decomposes the robot’s surroundings into connected, nonoverlapping units, while the road-marking method builds a feasible path graph composed of mark points and connecting edges.
(4) Artificial intelligence path planning: artificial intelligence techniques, such as artificial neural networks and evolutionary computation, are applied to robot path planning.
The genetic algorithm was the first intelligent algorithm applied to combinatorial optimization problems. In the field of path planning, related swarm intelligence methods such as the ant colony algorithm are also used, and many scholars have introduced the ant colony optimization algorithm into path planning research for underwater vehicles.
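
The artificial potential field idea from item (2) above can be sketched in 2D: the goal attracts, obstacles repel within an influence radius, and the robot steps along the combined force. All gains, radii, and step sizes below are assumed values for illustration only.

```python
# Sketch of one artificial-potential-field step: attractive force toward
# the goal plus repulsive forces from nearby obstacles.

import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                         radius=2.0, step=0.1):
    """Return the next position after one unit step along the net force."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < radius:  # repulsion acts only inside the influence radius
            mag = k_rep * (1.0 / d - 1.0 / radius) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy) or 1.0
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)
```

With no obstacles the robot simply steps straight toward the goal; an obstacle near the path deflects each step away from it, which is the local-avoidance behavior the method is known for.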

3. Results

3.1. Result of Flame Recognition and Detection Based on Camera Image

First, images and videos were collected. Then, OpenCV was used to process the images in stages to obtain data. The experimental process and results are shown in Figure 12. It can be seen from the figure that the result of each processing stage was distinct and the overall flame was essentially outlined. Finally, the results can help to judge where the fire source is and to take corresponding measures against it.

We randomly selected 19 pictures from the collected samples for flame identification. The types and results are shown in Table 1.


Image type | Number

Flame figures / all figures | 11/19
Recognizable figures / flame figures | 11/11
Identifiable incomplete figures / flame figures | 3/11
Mistakenly identified figures due to interference / interference figures | 2/8

It can be seen from the statistical results in Table 1 that the flame recognition system was not fully accurate. The main reason may be that the algorithm only identifies orange within a certain range, whereas the flame center is usually white. Although identification of the flame center needs to be improved, the periphery of some high-temperature flames could be identified in the experiments. The false recognition rate on interference figures was 25% (2 of 8), mainly because those figures contained concentrated color patches similar to the flame model, which means the flame recognition method still has room for improvement.

This experiment mainly studied the flame recognition method based on camera images. Starting from the flame recognition principle of RGB color space, the flame color range was delimited, a flame color model was established, and the images were then processed. The experimental results can be used for further human-computer interaction research.
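The RGB flame-color test can be sketched as a pixel-wise rule. The paper's calibrated color model is not given here, so the decision rule below (bright pixels ordered R > G > B, a common flame-color heuristic) and the threshold value are assumptions; the actual system was implemented in C++ with OpenCV.

```python
import numpy as np

def flame_mask(img_rgb, r_min=200):
    """Pixel-wise flame-color test on an RGB image (H x W x 3, uint8).

    Heuristic rule: flame-colored pixels are bright in red and satisfy
    R > G > B. The thresholds are illustrative, not the paper's model.
    """
    r = img_rgb[..., 0].astype(int)
    g = img_rgb[..., 1].astype(int)
    b = img_rgb[..., 2].astype(int)
    return (r >= r_min) & (r > g) & (g > b)

# toy image: one flame-orange pixel, one white pixel, one blue pixel
img = np.array([[[255, 140, 30], [255, 255, 255], [30, 60, 220]]], dtype=np.uint8)
mask = flame_mask(img)
```

Note that the white pixel fails the R > G test, which mirrors the limitation discussed above: the white flame center is not captured by an orange-band color model.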

3.2. Result of Flame Recognition System Based on Infrared Thermal Imaging

The infrared thermal imager was used to collect images and videos, OpenCV was used to process the images, and MATLAB was used to analyze the data.

Two groups of experiments were carried out separately. The first group was conducted in an indoor environment with human body temperature as the threshold temperature; the test distance was relatively close, within 1 meter. Some experimental results are shown in Figure 13.

The results show that the flame recognition method of infrared thermal imaging can basically recognize 70% of the complete target objects in the image and can display the general outline and direction of the target, which is convenient for the realization of further path planning.

The second group of experiments was carried out in an outdoor environment with flame temperature as the threshold temperature. The infrared thermal imager was about 1.5 meters away from the fire source. The experimental results are shown in Figure 14.

The experimental results show that the infrared thermal imaging flame identification method is highly accurate with a complete flame area in flame identification, which is helpful for calibrating the flame center or calculating the flame area.

In addition, the temperature threshold was tested, and the results are shown in Table 2.


Number | Threshold temperature | Image

1 | 100 | (thermal image not reproduced)
2 | 120 | (thermal image not reproduced)
3 | 140 | (thermal image not reproduced)
4 | 150 | (thermal image not reproduced)
5 | 170 | (thermal image not reproduced)

As can be seen from Table 2, when a low temperature threshold was set, the identified area was likely to be larger than the actual flame area, and misidentification occurred during the test, producing more interference. Conversely, when the threshold temperature was set high, such as 170°, a newly ignited flame that had not yet reached 170° could not be recognized, which is not conducive to timely detection and control of fire in real life. After repeated tests, the optimal temperature threshold was found to be about 140°: at this setting, a newly generated flame could be recognized quickly, and flame images with a large burning area suffered less interference.
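The threshold trade-off in Table 2 can be sketched by counting above-threshold pixels in a temperature map. The grid and its temperature values below are made-up illustrations, not the paper's data; they only reproduce the qualitative effect that a low threshold catches warm non-flame pixels while a high one misses the flame edge.

```python
import numpy as np

def flame_pixels(temp_map, threshold):
    """Count pixels whose temperature meets or exceeds the threshold."""
    return int(np.count_nonzero(temp_map >= threshold))

# toy 4x4 temperature map (degrees): a small flame core in a warm scene,
# with two warm non-flame pixels (110 and 120) acting as interference
temp_map = np.array([
    [30,  35, 110, 40],
    [32, 150, 260, 45],
    [31, 145, 180, 50],
    [29,  33, 120, 42],
])
counts = {t: flame_pixels(temp_map, t) for t in (100, 120, 140, 150, 170)}
```

At threshold 100 the interference pixels are included; at 170 only the hottest core survives; 140 keeps the flame region while rejecting the interference, matching the optimum reported above.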

3.3. Function Demonstration of the Mobile Sentry Robot

The patrol robot in this article was designed based on a relatively simple temperature environment. The purpose was to use an infrared thermal imaging camera to detect flames, then identify the flames through the OpenCV algorithm, and then adjust the patrol robot to perform fire-extinguishing operations through the master control (Figure 15).

3.3.1. Results of Flame Identification

First, a physical flame recognition experiment was conducted on the patrol robot to obtain the flame recognition distance and to analyze the influence of distance on the recognized temperature. The robot was about 7 meters away from the flame, and the actual flame area was at most about 0.16 square meters and at least 0.0225 square meters; cardboard was used as fuel and placed on an outdoor plaster floor. As can be seen from the camera and the processing screen, the flames were still very clear at 7 meters, as shown in Figure 16. Then, by comparing the flame temperature in the thermal image at different distances, it was found that for every additional meter of distance, the flame temperature in the image decreased by about 1°; the temperature threshold was adjusted accordingly to reduce the influence of the distance factor. In addition, the thermal imaging camera was mounted 66 centimeters above the ground with a viewing angle of about 24°, so the ground within 1.5 meters of the patrol robot could be observed. It can be seen from the results that the robot's infrared thermal imaging flame recognition system had a wide field of vision and a wide recognition range, achieved high accuracy in remote flame recognition, and could identify flames of small area, achieving the design purpose of the patrol robot.
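The roughly 1° drop per meter of distance suggests a simple linear compensation of the temperature threshold. The function below is a sketch of that idea only: the 1°/m slope comes from the measurement above, while the function name, the 2 m reference distance, and the linear form are assumptions for illustration.

```python
def compensated_threshold(base_threshold, distance_m,
                          ref_distance_m=2.0, drop_per_m=1.0):
    """Lower the temperature threshold for flames observed farther than the
    reference distance, so the ~1 degree/meter apparent cooling does not
    cause distant flames to be missed."""
    extra = max(0.0, distance_m - ref_distance_m)
    return base_threshold - drop_per_m * extra

# a flame at 7 m appears ~5 degrees cooler than at the 2 m reference,
# so the 140-degree threshold is relaxed to 135 degrees
t = compensated_threshold(140.0, 7.0)
```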

3.3.2. Results of Fire Judgment

A patrol robot was used to record the flame area, so as to analyze the data to judge the fire. The combustibles in this experiment were paper, and the distance between the patrol robot and the flame was 2 meters. Table 3 shows the statistical results of data recorded every five frames.


No. | Flame area (pixels)

1 | 339
2 | 364
3 | 383
4 | 375
5 | 362
6 | 345
7 | 363
8 | 368
9 | 419
10 | 438
11 | 459
12 | 450
13 | 545
14 | 958
15 | 1080
16 | 430
17 | 586
18 | 834
19 | 1279
20 | 1529
21 | 1370
22 | 1205
23 | 1151
24 | 1087
25 | 1086
26 | 1083
27 | 1140

A total of 353 data points were obtained in this experiment. A coordinate system was established with time on the X-axis and the flame-area pixel count on the Y-axis; Figure 17 shows the scatter diagram and curve produced in MATLAB.

It can be seen from Figure 17 that at about the 25th sample the flame area had a local peak, presumed to be a second fire source in its initial stage. Between the 50th and 100th samples the flame area reached its maximum, presumed to be when the fuel burnt violently and the first and second fire sources spread. After the 100th sample the flame area gradually dropped, indicating that the flame may have been in a decay stage. As for the flame-area peak appearing after the 100th sample, considering that the combustible was a spherical paper ball, it is speculated that internal combustion started as air reached the interior. According to the general trend of the curve, judging the fire by identifying the flame area is applicable.
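The trend judgment can be sketched by smoothing the flame-area series with a moving average and comparing its start and end. The window size, the 20% growth/decay margins, and the use of only the 27 samples from Table 3 are illustrative assumptions, not the paper's actual fire-judgment criterion.

```python
def moving_average(series, window=5):
    """Simple moving average to smooth the per-sample flame-area counts."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def trend(series, window=5):
    """Classify the fire trend from the smoothed flame area."""
    sm = moving_average(series, window)
    if sm[-1] > 1.2 * sm[0]:
        return "growing"
    if sm[-1] < 0.8 * sm[0]:
        return "decaying"
    return "steady"

# the 27 samples from Table 3 (one sample recorded every five frames)
areas = [339, 364, 383, 375, 362, 345, 363, 368, 419, 438, 459, 450,
         545, 958, 1080, 430, 586, 834, 1279, 1529, 1370, 1205, 1151,
         1087, 1086, 1083, 1140]
growth = trend(areas)
```

On this early segment of the data the smoothed area roughly triples, so the sketch classifies the fire as growing, consistent with the spreading stage described above.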

3.3.3. Firefighting Demonstration

The actual firefighting situation is shown in Figure 18. Based on machine vision, the robot identified the fire source and adjusted the spray device to carry out the fire-extinguishing experiment successfully. The experiment also shows that the elevation-angle range of the spray device was small, which does not suffice for high-intensity fire extinguishing in large buildings; the spray angle therefore still needs to be improved.

4. Discussion

In this paper, we focused on mobile sentry robots used in laboratory patrols and proposed a flame image recognition method based on FLIR’s C2 infrared thermal imager. Through the linear relation between temperature and the pixel value, a model was established and the flame recognition and image processing were carried out. This method can quickly and effectively identify the flame in the patrol robot test. In addition, the Visual Studio integrated development environment and OpenCV library were used to test the flame recognition system of camera images and the infrared thermal imaging flame recognition system, respectively, through C++ programming. A thermal imager was installed on the single-axis head of the patrol robot and tested. The experimental results show that the infrared thermal imaging flame recognition algorithm is feasible. Moreover, learning infrared thermal imaging image datasets can enhance the robot’s accurate identification of common heat sources in the laboratory, such as fluorescent lamps, air conditioners, and other facilities, to avoid false positives.

Different from the traditional video fire detection system, the flame recognition system proposed in this paper pays more attention to the characteristics of recognition efficiency, anti-interference, and flexibility and portability. When selecting the flame recognition mode, this study compared and analyzed the two flame recognition modes of camera imaging and infrared thermal imaging from multiple angles including cost, effect, efficiency, and other factors and finally adopted the infrared thermal imaging recognition method.

In practice, the mobile sentry robot can automatically patrol pharmaceutical chemistry laboratories and avoid obstacles while the thermal imager constantly scans the ambient temperature. When it determines that a high-temperature region is present, the robot stops patrolling, turns toward the fire, approaches to extinguish it, and sounds the alarm at the same time, which can reduce the pressure of routine safety inspection in chemistry laboratories. If a laboratory fire is put out in time at an early stage, the casualties and property losses of the laboratories will be greatly reduced.
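The patrol-and-respond behavior just described can be sketched as a small state machine. The state names and transition conditions below are assumptions inferred from the description, not the robot's actual control code.

```python
def next_state(state, hot_region_seen, at_fire, fire_out):
    """One transition of a simplified sentry-robot state machine.

    PATROL -> APPROACH when a high-temperature region is detected
    (the alarm would sound at the same moment); APPROACH -> EXTINGUISH
    on reaching the fire; EXTINGUISH -> PATROL once the fire is out.
    """
    if state == "PATROL" and hot_region_seen:
        return "APPROACH"
    if state == "APPROACH" and at_fire:
        return "EXTINGUISH"
    if state == "EXTINGUISH" and fire_out:
        return "PATROL"
    return state                                  # otherwise stay in the current state

# walk the machine through one detect-approach-extinguish cycle
trace = ["PATROL"]
for obs in [dict(hot_region_seen=True,  at_fire=False, fire_out=False),
            dict(hot_region_seen=True,  at_fire=True,  fire_out=False),
            dict(hot_region_seen=False, at_fire=False, fire_out=True)]:
    trace.append(next_state(trace[-1], **obs))
```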

Flame recognition and tracking in videos has been achieved in this research. How to use machine vision for obstacle avoidance and path planning is a focus of the development of patrol-robot vision. Existing research has proposed many mobile-robot path planning methods based on different algorithms, so path planning for patrol robots is a feasible next research topic.

5. Conclusion

Due to frequent accidents in university laboratories in recent years, laboratory safety inspection is particularly important. A safety inspection system can nip possible safety problems in the bud, especially when no one is on duty, such as after hours and during holidays. Timely detection and timely treatment are important means of preventing laboratory safety accidents.

The development of artificial intelligence and robot technology provides a new prospect for laboratory safety management. This project is the first to carry out an applied research on the combination of artificial intelligence and laboratory safety management for pharmaceutical professional laboratories.

Using robots to assist laboratory safety patrols can reduce the incidence of laboratory safety accidents to a minimum. The development and use of safety patrol robots can not only enrich the means of laboratory safety management, save labor cost, increase the frequency of inspection, and greatly reduce the risk of laboratory safety accidents, but also create a harmonious and safe environment for teaching and research, ensuring the sustainable development of various undertakings in colleges and universities. As the development of AI enters a new stage, interdisciplinary cooperation will highlight advantages and promote mutual progress. With attention to detail, we should create a safe experimental environment to provide the most basic guarantee for the smooth conduct of teaching and research experiments. Therefore, the mobile sentry robot has a good application prospect for laboratory security patrol.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Authors’ Contributions

Dong Zhang conceptualized the study, Siqi Gang was responsible for data curation, and Zhongyi Guo wrote and edited the manuscript.

Acknowledgments

This work was supported by a grant of Achievement Transformation Project of Basic Scientific Research Operating Expenses of Central Universities, China (no. 2019CG02).


Copyright © 2021 Dong Zhang and Zhongyi Guo. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
