A Mirror Detection Method in the Indoor Environment Using a Laser Sensor
When laser scanning is performed in an indoor environment, specular reflection from mirrors prevents the exact geometry of the scene from being restored. In this paper, a method to detect mirrors based on the principle of mirror symmetry is proposed. By finding the correspondence between an actual object and its image in the mirror, the position of the mirror can be determined without integrating any other sensor information.
SLAM (simultaneous localization and mapping) is a method used by robots for mapping and localization. It provides basic information for robot navigation, obstacle avoidance, path planning, and the execution of other tasks, so it is the key technique for a robot to explore an unknown environment independently. SLAM constructs or updates a map of an unknown environment based on sensor information obtained from an odometer, sonar, laser, or vision [2, 3]. Laser sensors have the advantage of high precision, and the measured data can be provided directly to the robot, so they are widely used in SLAM algorithms. 2D reconstruction based on laser scanning can reliably restore the real scene, but a mirror in the scene cannot be restored. Relatively few studies have addressed this problem, so finding the true position of a mirror in the environment remains a challenging issue.
When a robot explores an indoor environment, mirrors may be placed in the room or mounted on the wall. Because a mirror reflects nearly all incident light specularly, the laser sensor cannot measure the correct distance to it, which leads to a failure of reconstruction: the location of the mirror in the map constructed by SLAM is not displayed correctly. Therefore, the detection of mirrors is still a challenging problem. As shown in Figure 1(a), when a robot equipped with a laser sensor (Velodyne VLP-16) scans a scene containing a mirror, it constructs a two-dimensional map of the environment, as shown in Figure 1(b). The result shows that the mirror's real-world location is not detected; instead, it creates a gap in the map. The SLAM algorithm identifies the gap as a blank area, which can obviously mislead the robot's perception of the environment and cause damage.
In the past few years, researchers have put forward several solutions for finding the location of a mirror in the environment. Yang and Wang used a sensor fusion method integrating a sonar sensor and a laser sensor. A laser sensor provides detailed and accurate environment information, while sonar sensors can measure the distance to mirrors; through sensor fusion, mirrors and glass could be detected. However, this method relies on additional hardware and fusion algorithms, which complicates operation. Yang and Wang later proposed a solution to detect and track mirrors using only laser information. The mirror is detected by exploiting the property of mirror symmetry, but its limitation is that it relies on the step between the mirror and the wall to detect the mirror's location; otherwise, the mirror cannot be detected. Tatoglu and Pochiraju used only laser sensors and the reflection characteristics of specular objects to build an illumination model for detecting and identifying objects; however, they did not make a detailed analysis of mirror inspection. In recent years, Kim and Chung and Wei et al. have put forward related algorithms that address glass, not mirrors.
In this paper, we propose a new method to detect the location of mirrors. Different from previous research, it needs no additional sensors and uses only the property of mirror symmetry. Without imposing any restriction on the boundary or the position of the mirror, we estimate and update the probability of the mirror's existence by studying the symmetry between the robot and its image in the mirror, verify the location of the mirror, and find its boundary through the continuous motion of the robot. The experimental results show that our method can detect the location of the mirror successfully and effectively.
Because of specular reflection, the mirror itself cannot be detected with a laser sensor, but it forms a virtual image that the laser sensor can detect. Therefore, detecting the mirror based on the symmetry between real objects and their images in the mirror is a simple and feasible approach. However, because of the complexity of the environment, it is difficult to find symmetry relationships directly in the data obtained with the laser sensor, so it is essential to choose an object as the reference for mirror symmetry detection. Since the robot itself also forms an image in the mirror, and its position can be estimated by the Rao-Blackwellised particle filter (RBPF) SLAM algorithm, estimating the position of the robot's mirror image is a feasible way to detect the mirror.
2.1. Preparation Knowledge
Geometrical symmetry is used to find the location of the potential mirror, and a probability model determines the probability that a mirror exists. Our method is based on the symmetry between the robot and its image, so the robot's position and the position of its mirror image are the key information for determining the mirror [9, 10]. The algorithm uses RBPF SLAM to estimate the position of the robot [11-13], which can represent the robot's nonlinear motion model, and uses the observation model to determine the coordinates of obstacles. Our algorithm is integrated into RBPF SLAM and completes the mirror detection task while constructing the map in real time.
The SLAM problem can be expressed as estimating the posterior probability p(x_{1:t}, m | z_{1:t}, u_{1:t}), where x_{1:t} is the robot's pose trajectory, m is the map, z_{1:t} is the observation information, and u_{1:t} is the control information. By Rao-Blackwellised separation, localization and mapping are computed separately so that SLAM is solved in a lower dimension, as follows:

p(x_{1:t}, m | z_{1:t}, u_{1:t}) = p(x_{1:t} | z_{1:t}, u_{1:t}) · p(m | x_{1:t}, z_{1:t}),

where the left-hand side is the joint probability of pose and map, p(x_{1:t} | z_{1:t}, u_{1:t}) is the separated pose state estimate, and p(m | x_{1:t}, z_{1:t}) is the separated map estimate. This factorization separates pose estimation from map estimation. The prior probability of the robot pose can be estimated by the particle filter: each particle represents one trajectory and one environmental map, and because the map features are conditionally independent given the trajectory, a Kalman filter can be used to estimate each map recursively. The main steps of RBPF SLAM are as follows:
(1) Initialization: according to the prior probability of the robot's motion model, N particles {x_0^(i)} are selected, each with weight w_0^(i) = 1/N.
(2) Sampling from the proposal distribution: the next generation of particles {x_t^(i)} is generated from {x_{t-1}^(i)} according to the proposal distribution; usually the odometer motion model p(x_t | x_{t-1}, u_t) is used as the proposal.
(3) Computing the importance weights: the observation model p(z_t | x_t, m) assigns each particle a weight.
(4) Resampling: the effective number of particles N_eff = 1 / Σ_i (w^(i))² is taken as the criterion. When N_eff falls below a threshold, resampling is carried out: according to the weights, particles with larger weights replace those with smaller weights, and after resampling all particles have equal weights.
(5) Map updating: according to the known pose x_t^(i) and observation z_t, the map of each particle is updated.
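The steps above can be sketched in code. This is a minimal, illustrative particle-filter skeleton, not the paper's MATLAB implementation; `motion_model` and `likelihood` are hypothetical stand-ins for the odometer motion model and the observation model, and the per-particle map update of step (5) is omitted.

```python
import math
import random

def stratified_resample(particles, weights):
    """Particles with larger weights replace those with smaller weights."""
    n = len(particles)
    cumulative, total = [], 0.0
    for w in weights:
        total += w
        cumulative.append(total)
    new, i = [], 0
    for j in range(n):
        u = (j + random.random()) / n * total  # one draw per stratum
        while cumulative[i] < u:
            i += 1
        new.append(particles[i])
    return new, [1.0 / n] * n

def rbpf_step(particles, weights, control, observation,
              motion_model, likelihood, n_eff_threshold):
    """One RBPF iteration: sample from the proposal, weight, resample if needed."""
    # (2) Sample next poses from the proposal (odometer motion model).
    particles = [motion_model(p, control) for p in particles]
    # (3) Importance weights from the observation model, then normalize.
    weights = [w * likelihood(observation, p) for w, p in zip(weights, particles)]
    s = sum(weights)
    weights = [w / s for w in weights]
    # (4) Resample when the effective particle number drops below the threshold.
    n_eff = 1.0 / sum(w * w for w in weights)
    if n_eff < n_eff_threshold:
        particles, weights = stratified_resample(particles, weights)
    # (5) Per-particle map updating would follow here (omitted in this sketch).
    return particles, weights
```

With a noiseless 1D motion model and a Gaussian-shaped likelihood, one call advances all particles by the control input and concentrates weight on the particle closest to the observation.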
Up to now, the estimated robot position is denoted as x_t and its moving path as x_{1:t}. Besides the pose of the robot in the world coordinate system, we also need to locate obstacles in the environment. This requires the laser scanning information together with the current poses of the robot and the laser sensor. The obstacle's position is calculated as follows:

x_o = x_r + x_s·cos θ − y_s·sin θ + d·cos(θ + φ),
y_o = y_r + x_s·sin θ + y_s·cos θ + d·sin(θ + φ),

where (x_o, y_o) are the coordinates of the obstacle in the world coordinate system, (x_r, y_r) are the coordinates of the mobile platform in the world coordinate system, θ is the orientation of the mobile platform, (x_s, y_s) are the coordinates of the laser sensor in the moving-platform coordinate system, d is the distance of the obstacle relative to the origin of the laser sensor coordinate system, and φ is the angle of the laser beam.
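This transform can be written directly in code; the following is a short sketch, with the function name and argument layout chosen here for illustration rather than taken from the paper.

```python
import math

def obstacle_world_coords(robot_pose, laser_offset, distance, beam_angle):
    """Transform one laser range reading into world coordinates.

    robot_pose   : (x_r, y_r, theta) of the mobile platform in the world frame
    laser_offset : (x_s, y_s) of the laser sensor in the platform frame
    distance     : range d of the obstacle in the laser frame
    beam_angle   : angle phi of the laser beam in the laser frame
    """
    x_r, y_r, theta = robot_pose
    x_s, y_s = laser_offset
    # Rotate the sensor offset into the world frame, then add the beam vector.
    x_o = (x_r + x_s * math.cos(theta) - y_s * math.sin(theta)
           + distance * math.cos(theta + beam_angle))
    y_o = (y_r + x_s * math.sin(theta) + y_s * math.cos(theta)
           + distance * math.sin(theta + beam_angle))
    return x_o, y_o
```

For example, a robot at (1, 2) with heading 0, a sensor mounted 0.1 m ahead, and a 2 m return at a 90-degree beam angle yields an obstacle at roughly (1.1, 4).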
2.2. Proposed Mirror Detection Algorithm
As we can see from Figure 1, the goal of mirror detection is to distinguish the blank area from the mirror reflection area. We compare the actual location of the robot with all the obstacles returned by the laser scan; if the robot's image can be found and confirmed, we can determine whether the blank area is actually a mirror.
2.2.1. Symmetry Axis Detection
We can simplify the map of a scene as in Figure 2: the red line represents a mirror, N3 represents an object in the environment, N2 is the mirror image of N3, R is the estimated location of the robot, and N1 is the robot's image. The robot builds the map in real time while moving. Moreover, the moving speed, direction, and size of its image in the mirror are completely consistent with those of the actual robot, so we can use this feature to find the location of the mirror.
In the process of map building, we analyze the laser scanning data at each time step to find all the symmetry axes between the robot and candidate mirror images. If an obstacle scanned by the laser sensor is the robot's mirror image, a mirror is located on their symmetry axis. Because a mirror in the environment does not move, while the image of the robot moves together with the robot, the symmetry axis corresponding to the mirror should stay at the same location at every moment. If the robot has symmetric points about the same axis at successive moments, there is a probability that a mirror exists there. The basic process is shown in Figure 3. We assume that the coordinate of the robot at time t is R_t = (x_t, y_t), and the obstacles scanned by the laser sensor are P_1, P_2, ..., P_M with coordinates (x_1, y_1), (x_2, y_2), ..., (x_M, y_M), respectively. The symmetry axis follows from simple geometric relationships. For two points located at (x_1, y_1) and (x_2, y_2), their symmetry axis (the perpendicular bisector) is expressed as y = kx + b. Here, k = −(x_2 − x_1)/(y_2 − y_1) is the negative reciprocal of the slope of the line connecting the two points, and b = y_0 − k·x_0, where (x_0, y_0) = ((x_1 + x_2)/2, (y_1 + y_2)/2) is the midpoint of the two points. Then, we obtain the set of all symmetry axes L_t = {l_t^1, l_t^2, ..., l_t^M} at time t, where l_t^m is the symmetry axis between the position of the robot and object m. As shown in Figure 3, l_1, l_2, l_3 are the symmetry axes of points N_1, N_2, N_3, and the mirror is located at l_1.
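The geometry above, the perpendicular bisector of a point and its candidate image, plus the reflection needed to test later scans against a stored axis, can be sketched as follows (function names are our own illustrative choices):

```python
def symmetry_axis(p1, p2):
    """Perpendicular bisector of segment p1-p2, returned as (k, b) in y = k*x + b.

    Returns (None, x0) for a vertical axis (the two points share a y coordinate),
    which callers must handle separately.
    """
    (x1, y1), (x2, y2) = p1, p2
    x0, y0 = (x1 + x2) / 2.0, (y1 + y2) / 2.0   # midpoint lies on the axis
    if y1 == y2:                                 # axis is the vertical line x = x0
        return None, x0
    k = -(x2 - x1) / (y2 - y1)                   # negative reciprocal slope
    b = y0 - k * x0
    return k, b

def reflect(point, k, b):
    """Mirror image of a point about the line y = k*x + b."""
    x, y = point
    d = (x + (y - b) * k) / (1.0 + k * k)
    return 2.0 * d - x, 2.0 * d * k - y + 2.0 * b
```

Reflecting the robot's current position about a stored axis and checking whether a scan point lies near the result is exactly the symmetry test applied at each subsequent moment.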
During the subsequent scanning process, the robot analyzes all the symmetry axes obtained at the previous moments to determine whether its current position has a symmetric point about each of those axes.
2.2.2. Probability Updating
During the robot's movement, if there is a mirror, the mirror image exists at the symmetric position of the robot, and the symmetry point does not disappear as the robot moves. Therefore, we analyze the symmetry of the scanned points at each moment. The decisive criterion is whether the robot can consistently find a symmetry point about the same symmetry axis over several successive moments. If this condition is met, the probability that the symmetry axis is the location of a mirror increases. In contrast, if the symmetry relation between an image and the robot about a given axis disappears as the robot moves, that line is not the location of a mirror.
Every symmetry axis calculated at each moment is a candidate position for the mirror. The algorithm calculates and updates the probability that each symmetry axis is the position of the mirror during the movement of the robot:
(1) At the initial moment t_0, we assume that every point of the laser scan may be a mirror image of the robot, which yields a set of symmetry axes L_0 = {l_0^1, ..., l_0^M}. We assign the probabilities P = {p_1, ..., p_M}, in which p_m represents the probability that the mth symmetry axis is a mirror.
(2) With the movement of the robot, the laser sensor obtains the environment data at successive times t_1, t_2, t_3, etc. At each moment we check whether a symmetry point exists at the robot's symmetric position about each axis and update p_m according to the following conditions.
Condition 1. Symmetry point release: when the robot moves to a new position, the symmetry point of the previous moment disappears, as shown in Figure 4. The robot moves from one location to the next, the original symmetry point disappears, and a new symmetry point is generated at the new position. Under this circumstance, it is possible that the original mirror image disappears and a new image is created at the new location while the robot is moving, which is exactly how a mirror behaves.
Condition 2. Symmetry point hold: when the robot moves to a new location, the symmetry point at the previous location does not disappear, while a new symmetry point also appears at the new location. As shown in Figure 5, in this case some obstacle, such as a wall, exists at the symmetric position about the symmetry axis, which coincides with the moving direction of the robot.
Assuming that the robot is moving within the range of the mirror, there is always a symmetry point at its symmetric position. It is therefore feasible to update the probability that a symmetry axis is a mirror according to whether the symmetry point of the previous moment disappears. At moment t, the probability of symmetry axis l_m being a mirror can be written as

p_m(t) = p_s(t) · p_d(t),

where p_s(t) represents the probability that a symmetry point exists at the symmetric position of the robot at moment t: if a symmetry point exists, p_s(t) is assigned a larger value α; otherwise, it is assigned the smaller value 1 − α. p_d(t) is the status probability describing whether the symmetry point of the previous moment has disappeared: if the mirror image of moment t − 1 disappears at the current position, p_d(t) is assigned a larger value β; if the previous mirror image does not disappear, p_d(t) is assigned the smaller value 1 − β.
As the length of the mirror is finite, we need to detect the mirror within a limited time as the robot moves; otherwise, once the robot moves beyond the range of the mirror, the above conditions can no longer be satisfied. In the probability-updating process, the parameters α and β are selected based on several experiments, which show that the results meet the requirements when α is between 0.8 and 0.97 and β is between 0.75 and 0.92; we fix α and β to values within these ranges. The probability that symmetry axis l_m is a mirror over the whole motion can then be written as the average

P(l_m) = (1/T) Σ_{t=1}^{T} p_m(t).

That is, the probability is updated at each moment with the movement of the robot and averaged over time. A threshold P_th is selected to determine whether there is a mirror: when P(l_m) > P_th, we regard the axis as a mirror; otherwise, there is no mirror. The mirror detection algorithm flow is shown in Algorithm 1.
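The per-axis update and averaging can be sketched as below. The pairing of the two factors follows our reading of the update rule, and the default alpha and beta are illustrative values inside the ranges the paper reports (0.8 to 0.97 and 0.75 to 0.92), not the authors' chosen constants.

```python
def update_axis_probability(history, alpha=0.9, beta=0.85):
    """Average, over time, the per-step probability that an axis is a mirror.

    history is a list of (exists, released) booleans, one pair per time step:
      exists   -- a symmetry point exists at the robot's mirrored position now
      released -- the previous step's symmetry point disappeared (Condition 1);
                  False corresponds to the wall-like hold case (Condition 2)
    alpha, beta -- assumed parameter values for illustration only.
    """
    total = 0.0
    for exists, released in history:
        p_s = alpha if exists else 1.0 - alpha       # symmetry point present?
        p_d = beta if released else 1.0 - beta       # old point released?
        total += p_s * p_d
    return total / len(history)
```

A mirror-like history (point always present, old point always released) scores high, while a wall-like history (point present but never released) scores low, so a single threshold separates the two.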
2.3. Mirror Boundary Detection
Through the above steps, we can detect the symmetry axis on which the mirror is located; however, the mirror's boundary is still unknown, and we need to find its starting and ending points. Our algorithm traverses the robot's moving path and finds all the symmetry points about the symmetry axis. If successive symmetry points of the robot about the axis are found, the location where the first symmetry point occurs gives the starting point and the last one gives the ending point. If the detected symmetry axis is perpendicular to the direction of the robot's movement, the boundary of the mirror cannot be found in this way; the robot then needs to move parallel to the mirror to ensure that the boundary can be found.
Suppose the first symmetry point of the robot appears at moment t_s and the last one at moment t_e. Let the robot's position at the starting moment be (x_s, y_s) with symmetry point (x'_s, y'_s), and its position at the ending moment be (x_e, y_e) with symmetry point (x'_e, y'_e). Since the mirror lies on the perpendicular bisector between the robot and its image, the boundary points of the mirror are, respectively,

B_start = ((x_s + x'_s)/2, (y_s + y'_s)/2), B_end = ((x_e + x'_e)/2, (y_e + y'_e)/2).
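The midpoint computation is trivial but worth stating explicitly; a minimal sketch, assuming the first and last robot/image position pairs have already been extracted from the path:

```python
def mirror_boundary(robot_start, image_start, robot_end, image_end):
    """Mirror endpoints as midpoints of the robot and its mirror image.

    Because the mirror lies on the perpendicular bisector of the robot and
    its reflection, the first and last positions where a symmetry point is
    observed give the mirror's start and end points as the two midpoints.
    """
    (xs, ys), (xs_, ys_) = robot_start, image_start
    (xe, ye), (xe_, ye_) = robot_end, image_end
    start = ((xs + xs_) / 2.0, (ys + ys_) / 2.0)
    end = ((xe + xe_) / 2.0, (ye + ye_) / 2.0)
    return start, end
```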
3.1. Experiment Environment
In our experiment, a Velodyne VLP-16 laser sensor is mounted on top of the robot. The robot moves continuously in an indoor environment, running FastSLAM and the mirror detection algorithm. To demonstrate the effectiveness of the algorithm, we choose three different scenarios that reflect typical indoor mirror placements: a single mirror hanging on a wall, a single mirror standing alone in the center of a hall, and two mirrors placed face to face. As shown in Figure 6(a), a single mirror hangs on the wall, which is a common indoor situation; the mirror and the wall are in the same plane, and the algorithm needs to distinguish the mirror from gaps between the walls. Figure 6(b) shows a single mirror placed in a wider space. This scenario simulates certain indoor environments (for example, some large shopping malls hang mirrors on pillars, where the mirror completely blocks the wall); the difficulty is that there is no reference object for the mirror detection. To further increase the difficulty of detection and prove the effectiveness of the algorithm, we place two mirrors in a corridor, as shown in Figure 6(c). In this scene, the two mirrors image each other, which greatly increases the complexity of the detection.
The laser sensor used in our experiment is a Velodyne VLP-16, a 3D laser sensor with 16 channels and a vertical FOV of ±15 degrees. We extracted the scanning data of each channel separately. Since the purpose of our algorithm is to detect mirrors in a 2D indoor environment, the channel at −1 degree elevation is selected, because it is the most effective channel for scanning the mirror image of the robot. We implemented the mirror detection algorithm in MATLAB and integrated it into the FastSLAM algorithm. The real data acquired by the laser are shown in Figure 7, and the map of the room reconstructed by the FastSLAM algorithm is shown in Figure 8.
3.2. Experiment Result
The mirror detection algorithm uses the location of the robot estimated by SLAM and calculates all the symmetry axes between the robot and the other obstacles scanned at each time step. As shown in Figure 9, the diamond indicates the location of the robot, and the map is updated by RBPF SLAM along with the robot's movement.
With the movement of the robot, the laser sensor scans real-time information of the surrounding environment, and the SLAM algorithm generates and updates the map constantly. The process of updating the map is shown in Figure 10. When the robot moves to a new position, the laser sensor obtains a set of new points representing the environment, and the mirror detection algorithm updates the probabilities of all the symmetry axes according to this information.
In order to prove the effectiveness of the algorithm in a variety of situations, we made our robot move in different directions towards the mirror. In any case, our algorithm can find the location of the mirror accurately and efficiently.
In the proposed algorithm, we gradually eliminate the symmetry axes that have a small probability of being a mirror. Usually, if a mirror image of the robot cannot be found at a new moment, or if the mirror image of the previous moment does not disappear, the probability that the symmetry axis is a mirror decreases, and such axes are eliminated gradually. Finally, if a mirror exists in the room, a symmetry axis remains at the position where the mirror is located.
When the location of mirror is determined, the algorithm will review the moving process of the robot. If successive symmetry points of the robot are found, the location where the first symmetry point occurred is the starting point and the last one is the ending point. Then, the edge of the mirror is confirmed.
3.2.1. Case One
As shown in Figure 6(a), this is the most common situation: the mirror hangs on the wall, and the mirror and the wall are in one plane. The robot needs to distinguish the mirror from the gaps between the walls. We put the mirror in a room with a relatively complex environment: in Figure 6(a), an artificial wall and some obstacles are placed in the room, which creates many gaps, and the robot is asked to move toward the mirror from different directions. When the robot moves parallel to the mirror, the result is shown in Figure 11; when the robot moves toward the mirror at a certain angle, the result is shown in Figure 12. As we can see from the results, regardless of the robot's position or moving direction, only one symmetry axis is finally detected, and this is where the mirror is located.
After the symmetry axis of the mirror is located, the starting and ending point of the mirror could be found about the symmetric axis. As shown in Figure 13, the starting point could be found when the mirror image first appears about the symmetric axis and the ending point is the position where the mirror image last appears about the symmetric axis.
In this experiment, we completed the detection of a framed mirror in a more complicated environment. A different detection method has been proposed previously. Although both methods use the idea of mirror symmetry, the specific approaches differ. The earlier algorithm first finds the edge of the mirror, that is, it finds the two endpoints first, then calculates the coordinates of the symmetric points of the objects between the two endpoints, and finally determines whether the obtained symmetric points exist in the real environment. If more than 50% of the points satisfy the condition, the algorithm regards the position as a mirror. The limitation is that for a mirror without a frame, the first step of finding the mirror's edge fails, and the mirror cannot be detected.
3.2.2. Case Two
As shown in Figure 6(b), a mirror is placed in the hall with nothing connected to it, and the algorithm needs to detect the mirror as the robot passes by. Because there is no reference object beside the mirror, the key point is whether the robot can find the mirror quickly. As shown in Figure 14, the robot is asked to move parallel to the mirror. When the robot moves toward the mirror at an angle and perpendicular to the mirror, the results are shown in Figures 15 and 16, respectively. As we can see, the symmetry axis is finally detected in all circumstances. Once the symmetry axis of the mirror is located, we can traverse the robot's moving path to find the boundary of the mirror.
3.2.3. Case Three
To further prove the validity of our algorithm, we make the environment more complex and place two mirrors facing each other in a corridor, as shown in Figure 6(c). The two mirrors image each other, which may cause detection failure. The results are shown in Figure 17: because the pattern of the mirror image's appearance and disappearance does not change, both mirrors are detected successfully.
3.3. Comparative Results
Compared with existing methods, the contribution of this paper is that the mirror inspection task can be finished using only a laser sensor, without adding any hardware. Moreover, there is no limitation on the location of the mirror, and the method can detect multiple mirrors. As we can see in Table 1, only our method and Yang's methods [4, 5] can detect mirrors. Our method and one of Yang's methods do not need additional hardware, which makes the equipment simpler, more convenient to use, and cheaper; compared with that method, ours has no limitation on the mirror's position. Meanwhile, our method can accomplish multimirror detection, which means it has fewer limitations and a wider application range.
In this paper, we proposed a mirror detection algorithm using the symmetry between the robot and its mirror image. First, the algorithm finds all the symmetry axes between the robot and other obstacles at each moment and updates the probability that each symmetry axis is a mirror according to the status of the robot's mirror image as the robot moves. Whether a mirror exists is determined by the existence of the symmetric image at adjacent moments. The mirror's boundary is confirmed by traversing the robot's moving path, which determines the starting and ending points of the mirror. The experimental results show that our algorithm can detect the mirror effectively, regardless of the indoor environment and the position of the mirror.
The proposed method can detect a mirror with a frame, and it also successfully detects a mirror without a frame. In addition, two mirrors can be detected by the proposed method. We integrate our algorithm into FastSLAM, which completes the mirror detection while the map is being constructed.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This work was supported by the National Key Research and Development Program (no. 2020YFC0811004), High Level Talent Scientific Research Start-Up Fund (no. 107051360021XN090/001), and 2019 Beijing Basic Scientific Research Business Innovation Team Project (no. 110052971921/003).
References
S. Park and G. Lee, "Mapping and localization of cooperative robots by ROS and SLAM in unknown working area," in Proceedings of the 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), pp. 858–861, Kanazawa, Japan, November 2017.
X. Chen, H. Zhang, H. Lu, J. Xiao, Q. Qiu, and Y. Li, "Robust SLAM system based on monocular vision and LiDAR for robotic urban search and rescue," in Proceedings of the IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), pp. 41–47, Shanghai, China, October 2017.
S. W. Yang and C. C. Wang, "Dealing with laser scanner failure: mirrors and windows," in Proceedings of the 2008 IEEE International Conference on Robotics and Automation (ICRA), Pasadena, CA, USA, May 2008.
A. Tatoglu and K. Pochiraju, "Point cloud segmentation with LIDAR reflection intensity behavior," in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 786–790, Saint Paul, MN, USA, May 2012.
S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics, MIT Press, Cambridge, MA, USA, 2005.
Z. Jiang, W. Zhou, H. Li, Y. Mo, W. Ni, and Q. Huang, "A new kind of accurate calibration method for robotic kinematic parameters based on the extended Kalman and particle filter algorithm," IEEE Transactions on Industrial Electronics, vol. 65, no. 4, pp. 3337–3345, 2018.
C.-Y. Sun, J. Hou, W. Sun, and B. Jiacheng, "An adaptive intelligent particle filter for state estimation," in Proceedings of the 36th Chinese Control Conference (CCC), Dalian, China, July 2017.
M. Q. Liu, H. Zhicheng, F. Zhen, Z. Senlin, and H. Yan, "Infrared dim target detection and tracking based on particle filter," in Proceedings of the 36th Chinese Control Conference (CCC), Dalian, China, July 2017.