Abstract

The rapid development of intelligent control technology has expanded the functions of service robots for the home environment, and family members' requirements for service robots have likewise been upgraded from simply freeing their hands and reducing housework to emotional communication and intelligent companionship. Based on Internet of Things and fuzzy control technology, this paper builds a home robot control system and gives a brief overview of the mechanical structure design of the home service robot, focusing mainly on the core control system and global path planning methods. Moreover, this paper adopts a control system structure that combines an upper computer with a bottom-level motion controller, together with simple and practical system software, so the system achieves high stability. Finally, the performance of the constructed system is verified through experimental research. The results show that the system constructed in this paper achieves a practical effect.

1. Introduction

Since its emergence, robotics technology has been widely applied in all fields of society; its technological level is updated day by day, and robots are becoming increasingly intelligent. Robots have developed from completing simple designated actions through simple coded instructions to autonomous behavior planning using multisensor fusion technologies such as touch, vision, and voice, and from completing a single designated task to autonomous human-computer interaction that incorporates intelligent control algorithms. Taking the early education robot as an example, a smart robot can move autonomously in an indoor environment and locate and track infants and young children in real time, so that parents can monitor their safety at any time. Moreover, by combining an embedded controller with powerful computing capability, it can not only avoid random obstacles rapidly and in real time in complex indoor environments but also communicate smoothly with infants and young children. It can therefore accompany a child and play an early education role in the absence of parents and, to a certain extent, share the life pressure of young couples [1].

Replacing manual manufacturing and production with artificial intelligence has always been a human desire. However, even the most intelligent robots in the world currently have very limited adaptability to changes in the external environment, and there is still a big gap between reality and expectations, which deeply affects the promotion and application of modern artificial intelligence robots. Among the many factors, an important reason is that artificial intelligence still has great shortcomings in its ability to perceive the outside world. To better solve this problem, various external sensors have been added to robots, and smart home sensors are a very important link. In robotics research, the smart home is a very valuable field, and the study of wireless-based smart home robots has been a hot topic in recent years [2]. A smart home robot is a robot that completes certain tasks for people within the scope of the family; it is mainly engaged in home service, elderly care, and children's education [3]. Moreover, smart home robots have a well-developed “brain” that relies on various sensors to understand human language, talk to users, and execute user commands.

The main functions of the home service robot are to provide users with security, voice assistance, housekeeping, home appliance control, and other services. Service robots contain a large number of cutting-edge technologies, the core of which includes autonomous navigation, voice recognition, multimodal human-computer interaction, multi-degree-of-freedom dexterous operation, and the perception and understanding of human emotion and motion [4]. Centered on the family, the smart home uses integrated wiring, network communication, control, and other technologies to integrate related equipment into a whole and form a comprehensive management system for residential equipment and family affairs [5]. In recent years, the integration of service robots and smart homes has become closer. Service robots can not only help users complete housework but can also, as a medium, be linked with smart home control systems to assist users in managing residential equipment and household affairs, thereby making home life more intelligent.

This article combines Internet of Things technology and fuzzy control technology to construct a home robot control system and analyzes it in combination with the actual needs of home life services.

2. Related Work

The development of smart home robots to this day is inseparable from people's unremitting efforts. Although robots have not yet entered thousands of households, the era of smart home robots is on the way. The study in [6] put forward the idea of using parallel mechanisms on robot manipulators, which finally succeeded and was widely used in industrial production; this kind of mechanism is the predecessor of the smart home robot. The study in [7] found that assembly robots can use parallel mechanisms and achieved good results, which further promoted the development of robots; thus, the first truly parallel robot was manufactured. The study in [8] used a camera to identify the traveling problems encountered by the robot during movement.

With the improvement of technology, research on smart home robots has achieved great results. Human exploration of deep space has taken a big step in history, and the mobile robots used in Mars exploration already apply the autonomous positioning technology developed for smart home robots [9]. Smart home robots and positioning research are changing human lives little by little. At present, to reduce manual work, autonomy has become mainstream technology in many fields, such as autonomous robots on factory floors, automated detection robots in resource exploration, and the positioning research applied to driverless cars. The method of using object feature extraction and object detection to obtain the motion parameters and position of an object in space is of great significance for computer vision, object positioning and tracking, and target recognition [10]. In recent years, it has received more and more attention from researchers, and related technologies will be studied and developed in greater depth [11].

For indoor positioning and navigation, the study in [12] proposed a vehicle-mounted visual SLAM algorithm for a mobile robot system. The algorithm introduces 2.5D local maps, which can be used directly for fast obstacle avoidance and local path planning and can construct 2D raster maps. However, this method loses the 3D information of the environment and cannot fully reproduce the environmental structure. The study in [13] proposed a SLAM algorithm based on automatic detection and object labeling. The algorithm helps the robot recognize objects without relying on any prior information and can insert landmarks into the grid map, so its positioning and navigation accuracy improves over traditional purely visual SLAM. However, the algorithm also cannot construct a complete 3D model, and the presence of road signs causes a certain loss of environmental structure. Since Microsoft introduced the Kinect depth camera, the implementation cost of SLAM systems based on depth camera sensors has dropped greatly. The study in [14] proposed RGB-D SLAM. The main method is to use the depth camera to obtain SIFT features with depth information, estimate the matching relationship between images, use the ICP (iterative closest point) algorithm or the graph optimizer g2o to optimize the point cloud pose transformation, and finally obtain a globally consistent map. However, this algorithm has poor real-time performance due to the slow SIFT features and is limited by the measurement range of the depth camera, so RGB-D SLAM cannot be applied to large-scale indoor spaces. The study in [15] proposed and open-sourced ORB-SLAM, a monocular visual SLAM system. ORB-SLAM is developed from the PTAM framework with most components improved; a loop detection and closure mechanism is added to eliminate accumulated error and achieve more accurate positioning and map construction. The study in [16] proposed ORB-SLAM2, which adds support for calibrated binocular and RGB-D cameras compared with ORB-SLAM; at the same time, the feature extraction and matching methods are improved and the implementation code is clearer. However, PTAM, ORB-SLAM, and ORB-SLAM2 are all feature-based SLAM systems whose global map is a sparse feature point map, so they are difficult to use for robot navigation. Concerned about the stability of visual SLAM in special environments, the study in [17] proposed a tightly coupled sensor fusion method based on ORB-SLAM2, which combines image data with odometer data to improve tracking when features are absent. Although this work improves the robustness of ORB-SLAM2, it greatly increases the occupancy of the robot's computing resources, and the improved algorithm mainly handles unexpected situations without improving accuracy or performance under normal conditions. The study in [18] proposed an extended ORB-SLAM2 algorithm based on a standard spherical camera model, which enables the system to capture wide-view scenes through a fisheye camera and thereby improves robustness; the study also proposes a semidense feature matching algorithm that uses the high-gradient areas of the image to construct a semidense map. However, this algorithm is not suitable for large-scale scenes, and although it can construct semidense maps, the study did not further explore the use of the map information.

With the rapid rise of deep learning in recent years [19], applications of deep learning to visual SLAM are constantly being proposed. The study in [20] proposed CNN-SLAM, which uses a trained CNN model to replace feature point extraction and matching. Compared with traditional interframe estimation algorithms, the learning-based algorithm is simple and intuitive and runs faster online. However, this method relies too heavily on the dataset, and the combination of deep learning and SLAM is still in a preliminary exploration stage, so the differences between the various algorithms are large.

3. Robot Positioning

Only when the navigation control system knows the robot's positioning coordinates in the environment can it ensure that the generated path is timely and fits the environmental information. Therefore, environmental positioning is the premise of the robot's indoor navigation and path planning.

Based on the characteristics of the environmental scanning information generated by a two-dimensional laser sensor, this paper constructs a multidimensional data space to reduce the interference of white noise in the environmental information, uses hypothesis testing theory to extract and estimate point features, estimates and screens line features by constructing a probability model, and fits accurate line segment information with the least-squares method, which provides an accurate environmental model for updating the robot's pose.
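To make the line fitting step concrete, the following is a minimal sketch (not the paper's code): it fits a clustered set of 2D scan points by total least squares and returns the normal-form parameters used for the line features of Section 3.2. The function name and the synthetic wall data are illustrative assumptions.

```python
import numpy as np

def fit_line_tls(points):
    """Fit a line to clustered 2D scan points by total least squares.

    Returns (d, phi): the line in normal form x*cos(phi) + y*sin(phi) = d,
    where d is the orthogonal distance from the sensor origin to the line
    and phi is the direction of the line's unit normal.
    """
    centroid = points.mean(axis=0)
    # The normal of the best-fit line is the direction of least variance,
    # i.e., the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    d = float(normal @ centroid)
    if d < 0:                      # keep d >= 0 by flipping the normal
        normal, d = -normal, -d
    phi = float(np.arctan2(normal[1], normal[0]))
    return d, phi

# Noisy points on the wall x + y = 2, i.e., d = sqrt(2), phi = 45 degrees.
rng = np.random.default_rng(0)
t = np.linspace(-1.0, 1.0, 50)
pts = np.stack([1.0 + t, 1.0 - t], axis=1) + rng.normal(0.0, 0.01, (50, 2))
print(fit_line_tls(pts))   # approximately (1.414, 0.785)
```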

3.1. Feature Recognition

The laser sensor scans the environmental information in real time, and matching it against the known environmental map is the prerequisite for the robot to achieve precise positioning; at the same time, it reduces the accumulated displacement error of the mobile chassis odometer model. Therefore, how to quickly and accurately extract indoor environment features is the basis for constructing the environmental map, and self-positioning is the first problem the robot must solve. The indoor home environment is relatively simple: in addition to common basic components such as walls, corners, and doors, there are irregular items purchased by the homeowner such as sofas, floor-standing air conditioners, and cabinets. To facilitate the extraction of environmental information, line segments extracted from object edges can represent most features, and irregular object edges that are not easy to describe can be uniformly segmented and fitted according to point features. It can be seen that point feature and line feature extraction is the primary problem of environmental feature extraction.

In order to restore the environmental information to the greatest extent, a data space is first constructed to meet the requirements of wide-area data collection. Because of environmental noise and the requirements on data processing speed, a downsampling feature processing method is adopted to simplify the environmental data, reduce the difficulty of collecting useful feature information, and reduce the computational cost of the algorithm in the multidimensional data space. The Gaussian white noise that appears during information collection can also be filtered efficiently by the mathematical statistics of data dimensionality reduction and by clustering the scan points belonging to the same object.
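As one plausible reading of the downsampling step (the paper does not give its exact form), the sketch below thins a raw scan by averaging every k consecutive readings, which also attenuates zero-mean Gaussian noise; all names are illustrative.

```python
import numpy as np

def downsample_scan(ranges, angles, k=3):
    """Thin a raw laser scan by averaging every k consecutive readings.

    Averaging k independent readings divides the variance of zero-mean
    Gaussian noise by k, so this acts as both downsampling and filtering.
    """
    n = (len(ranges) // k) * k                          # drop the ragged tail
    rho = np.asarray(ranges[:n], float).reshape(-1, k).mean(axis=1)
    theta = np.asarray(angles[:n], float).reshape(-1, k).mean(axis=1)
    return rho, theta

# A 360-point scan at 1 degree resolution reduced to 120 points.
angles = np.deg2rad(np.arange(360.0))
ranges = 2.0 + np.random.default_rng(1).normal(0.0, 0.02, 360)
rho, theta = downsample_scan(ranges, angles, k=3)
print(len(rho))   # 120
```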

The data is scanned with the two-dimensional laser sensor, and the environmental feature extraction flowchart is shown in Figure 1. In the polar coordinate system of the scanning plane, the scan can be expressed as [21]
$$P = \{p_i = (\rho_i, \theta_i) \mid i = 1, 2, \ldots, N\},$$
where $\rho_i$ is the measured polar diameter (range) and $\theta_i$ is the polar angle of the $i$-th sampling point.

The scan data sample space is infinite and is represented as $N$ data in the form of sampling points, and the Gaussian white noise in the initial scan data is represented as
$$\varepsilon_i \sim N(0, \sigma^2).$$

3.2. Estimation of Indoor Environmental Features

In the n-dimensional sample space, the feature estimate of a specific sampling point is determined by the features of the sample points in its left and right neighborhoods. For indoor environments where obstacle information is relatively simple, the features of the left and right neighborhoods of a single sampling point can be abstracted into four types: line segments, curves, points, and unknown features. The algorithm estimates line segments in the left and right neighborhoods of the sampling point and adopts hypothesis testing to analyze whether the difference between the sampling point and the sample is caused by sampling error or reflects an essential difference, so as to verify the feature information of the sampling point.

In the triangle $ABC$ shown in Figure 2, if $AD$ bisects $\angle BAC$ and meets $BC$ at $D$, with $\angle BAD = \angle DAC = \alpha$, then
$$AD = \frac{2 \cdot AB \cdot AC \cdot \cos\alpha}{AB + AC}.$$

From the angle bisector theorem, we can see that the ratio of the opposite side segments is equal to the ratio of the corresponding adjacent sides, namely [22],
$$\frac{BD}{DC} = \frac{AB}{AC}.$$

By applying the law of cosines to $\triangle ABD$ and $\triangle ACD$, respectively, we obtain
$$BD^2 = AB^2 + AD^2 - 2 \cdot AB \cdot AD \cdot \cos\alpha,$$
$$DC^2 = AC^2 + AD^2 - 2 \cdot AC \cdot AD \cdot \cos\alpha.$$

In addition, squaring the angle bisector relation gives
$$BD^2 \cdot AC^2 = DC^2 \cdot AB^2.$$

In summary, substituting the two cosine expressions into this relation and simplifying, we obtain
$$AD^2\left(AC^2 - AB^2\right) = 2 \cdot AB \cdot AC \cdot AD \cdot \cos\alpha \cdot \left(AC - AB\right).$$

Therefore, the following result is obtained:
$$AD = \frac{2 \cdot AB \cdot AC \cdot \cos\alpha}{AB + AC}.$$

In Figure 3, $\rho_i$ represents the polar diameter of the sampling point $i$, $\hat{\rho}_{ij}$ represents the estimated value of the polar diameter of the sampling point $j$ in the left and right neighborhoods of the sampling point $i$, and $\rho_{i,2j}$ represents the polar diameter of the sampling point $2j$ in the left and right neighborhoods of the sampling point $i$.

According to the property of the angle bisector above, expressed in polar coordinates with the sensor at the origin, the estimated value of the polar diameter of the sampling point $j$ in the left and right neighborhoods of the sampling point $i$ can be obtained as follows:
$$\hat{\rho}_{ij} = \frac{2\,\rho_i\,\rho_{i,2j}\cos\left(j\,\Delta\theta\right)}{\rho_i + \rho_{i,2j}},$$
where $\Delta\theta$ is the angular resolution of the laser sensor.
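A numeric sanity check of this estimate, assuming $\Delta\theta$ is the sensor's angular resolution and the notation of Figure 3; the flat-wall test data and function name are illustrative.

```python
import numpy as np

def estimate_rho(rho_i, rho_i2j, j, dtheta):
    """Estimate the polar diameter of neighborhood point j of point i.

    If points i, j, and 2j lie on one line segment, the ray to j bisects
    the angle between the rays to i and 2j, so the angle bisector length
    formula gives the expected range along the middle ray.
    """
    return 2.0 * rho_i * rho_i2j * np.cos(j * dtheta) / (rho_i + rho_i2j)

# Rays at 0, 1, and 2 degrees hitting the wall x = 2: range = 2 / cos(angle).
dtheta = np.deg2rad(1.0)
rho = lambda k: 2.0 / np.cos(k * dtheta)
print(estimate_rho(rho(0), rho(2), j=1, dtheta=dtheta))   # ~2.00030
print(rho(1))                                             # ~2.00030 (true value)
```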

In a line segment environment, $\hat{\rho}_{ij}$ is the unbiased estimate of the true polar diameter $\rho_{ij}$, so $E(\hat{\rho}_{ij} - \rho_{ij}) = 0$. That is, the polar diameter deviations $X_j = \hat{\rho}_{ij} - \rho_{ij}$ of the sample points in the left and right neighborhoods of the sample point $i$ satisfy the conditions of independent and identical normal distribution. Accordingly, the statistical characteristics of the $t$ distribution [23] are as follows:
$$t = \frac{\bar{X} - \mu}{S/\sqrt{n}} \sim t(n-1).$$

Among them, the points in the acceptance domain are the feature points of the line segment, and the points in the rejection domain are point features or other environmental features.

The sample points $X_j$ in the left and right neighborhoods of the sample point $i$ are samples of the population $N(\mu, \sigma^2)$, and there are [24]
$$E(\bar{X}) = \mu, \qquad E(S^2) = \sigma^2.$$

Among them, $S^2$ is the sample variance and $n$ is the number of points in the left or right neighborhood of the $i$-th sampling point.

The sample mean is
$$\bar{X} = \frac{1}{n}\sum_{j=1}^{n} X_j.$$

In the left and right neighborhoods of sampling point $i$, closer and farther sample points have different effects on the sampling point $i$. Therefore, the probability function of the degree of influence is defined as a normal distribution model related to the distance, namely,
$$w_{ij} = \frac{1}{\sqrt{2\pi}\,\sigma_w}\exp\left(-\frac{\left(k_{ij} - \mu_w\right)^2}{2\sigma_w^2}\right).$$

Here, $w_{ij}$ is the feature judgment weight of the sample point $j$ with respect to the sample point $i$ (the weight of the influencing factors); $k_{ij}$ is the number of sample points in the interval between the sample points $i$ and $j$; $\mu_w$ is the mean separation over which the polar diameter of the sample point $i$ is affected; and $\sigma_w$ is the corresponding standard deviation. $\mu_w$ and $\sigma_w$ can be set according to the required accuracy of feature extraction.

The sample variance is
$$S^2 = \frac{1}{n-1}\sum_{j=1}^{n} w_{ij}\left(X_j - \bar{X}\right)^2.$$

The selected statistic is
$$t = \frac{\bar{X} - \mu}{S/\sqrt{n}}.$$

Since the sample statistic $S^2$ is an unbiased inference of $\sigma^2$, $S^2/\sigma^2$ is approximately 1 within the confidence interval of the acceptance domain $1 - \alpha$. Therefore, for the $n$-point neighborhood of the sample point, formula (14) is approximately equal to $u = (\bar{X} - \mu)/(\sigma/\sqrt{n}) \sim N(0, 1)$.

That is, the acceptance domain is
$$\left|\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}\right| \le u_{\alpha/2}.$$

The rejection domain is
$$\left|\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}\right| > u_{\alpha/2}.$$
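A sketch of this accept/reject decision under the known-variance (standard normal) approximation derived above; the function name, the significance level, and the test data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def is_line_feature(deviations, sigma, alpha=0.05):
    """Decide whether sampling point i belongs to a line segment.

    deviations: the X_j = rho_hat_ij - rho_ij of the n neighborhood points.
    Under H0 (the neighborhood lies on one line) the X_j are zero-mean
    Gaussian, so u = X_bar / (sigma / sqrt(n)) is approximately N(0, 1)
    and should fall inside the acceptance domain |u| <= u_{alpha/2}.
    """
    x = np.asarray(deviations, dtype=float)
    u = x.mean() / (sigma / np.sqrt(len(x)))
    return abs(u) <= norm.ppf(1.0 - alpha / 2.0)

rng = np.random.default_rng(2)
print(is_line_feature(rng.normal(0.00, 0.01, 10), sigma=0.01))  # True: on a line
print(is_line_feature(rng.normal(0.05, 0.01, 10), sigma=0.01))  # False: point/other
```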

The point feature expression is $p = (\rho, \theta)$, where $\rho$ is the polar diameter of the sampling point and $\theta$ is the polar angle of the sampling point.

In order to verify whether a sampling point is a point feature, the distribution characteristics of the remaining sample points in the neighborhood of the extreme point can be checked. The noise in the original scan data is judged to be Gaussian white noise when its instantaneous value obeys the Gaussian distribution and its power spectral density is uniformly distributed, with distribution function $N(\mu, \sigma^2)$. If the extreme point lies on a straight line, its small neighborhood can be regarded as a line segment. According to the hypothesis test, there are
$$H_0: \mu = \mu_0, \qquad H_1: \mu \neq \mu_0.$$

Therefore, to verify that a sampling point is a point feature, the statistic must meet the deviation requirement of the rejection domain, that is, it must exceed the upper quantile $u_{\alpha/2}$.

In the small neighborhood of the extreme point, the polar diameter values of the remaining scan sample points relative to the extreme point are samples of the population $N(\mu, \sigma^2)$, and the sample variance $S^2$ and the sample mean $\bar{X}$ satisfy the following:
$$E(\bar{X}) = \mu, \qquad E(S^2) = \sigma^2.$$

Among them, $X_j$ ($j = 1, \ldots, n$) denotes the polar diameter of the $j$-th remaining sample point relative to the extreme point.

The sample mean is
$$\bar{X} = \frac{1}{n}\sum_{j=1}^{n} X_j.$$

The sample variance is
$$S^2 = \frac{1}{n-1}\sum_{j=1}^{n}\left(X_j - \bar{X}\right)^2.$$

The selected statistic is
$$t = \frac{\bar{X} - \mu}{S/\sqrt{n}}.$$

Because $S^2$ is the unbiased estimate of $\sigma^2$, $S^2/\sigma^2$ is approximately 1 when the null hypothesis holds. For a fixed neighborhood (the number of sampling points is $n$), formula (21) is approximately $u = (\bar{X} - \mu)/(\sigma/\sqrt{n})$. Therefore, the rejection domain is
$$\left|\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}\right| > u_{\alpha/2}.$$

Sampling points that meet the deviation requirement between the actual observed value and the estimated value of the feature point, as defined by the rejection domain, are regarded as point features and denoted by $p = (\rho, \theta)$.

We assume that the line segment feature is denoted as $l = (d, \varphi, p_s, p_e)$. Among them, $d$ is the distance between the sampled line segment and the laser sensor in the base coordinate system; to facilitate calculation, the installation position of the laser sensor is selected as the origin of the coordinates. $\varphi$ is the angle between $d$ and the $x$-axis direction of the base coordinate system, and $p_s$ and $p_e$, respectively, represent the starting point and end point of the detected line segment. The position equation of the line segment in the base coordinate system is
$$\rho\cos\left(\theta - \varphi\right) = d,$$
where $(\rho, \theta)$ ranges over the scan points between $p_s$ and $p_e$.
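Given a fitted line feature, the position equation above yields a direct membership test: a scan point lies on the segment's supporting line when its orthogonal distance $|\rho\cos(\theta - \varphi) - d|$ is small. A minimal sketch, with the tolerance and test data chosen purely for illustration:

```python
import numpy as np

def on_line(rho, theta, d, phi, tol=0.02):
    """Boolean mask of scan points satisfying rho * cos(theta - phi) = d.

    |rho * cos(theta - phi) - d| is the orthogonal distance of the point
    to the line, so points within tol meters are counted as on the line.
    """
    return np.abs(rho * np.cos(theta - phi) - d) <= tol

# The wall x = 2 has d = 2 and phi = 0; two points on it and one off it.
theta = np.deg2rad(np.array([0.0, 10.0, 20.0]))
rho = np.array([2.0, 2.0 / np.cos(np.deg2rad(10.0)), 3.0])
print(on_line(rho, theta, d=2.0, phi=0.0))   # [ True  True False]
```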

4. Home Robot Control System Based on the Internet of Things and Fuzzy Control

According to the analysis of system requirements, in this system the user gives orders to the smart home robot, and the robot uses wireless signals to network the various indoor terminal devices so as to achieve human-body infrared sensing, light control, and control of electric curtains and other household appliances. The smart home robot model is shown in Figure 4.

This article uses Internet of Things technology to construct the home robot control system. For the network structure, this paper uses a ZigBee wireless network. The topological structures commonly used in ZigBee wireless networks include star networks, mesh networks, and tree networks; these topologies are all built from a coordinator, routing nodes, and terminal nodes, as shown in Figure 5.

The star topology is composed of a central node and the nodes connected to it, shaped like a star; the mesh topology is composed of multiple links between connection points, forming an irregular mesh; and the tree topology looks like an upside-down tree, in which the top node is the root node, multiple branch nodes are connected below the root node, and each branch node can in turn connect to further branch nodes. This article chooses the relatively simple star topology.

Figure 6 shows the hardware network diagram of the communication subsystem of the smart home robot. The coordinator and routers are the center of the entire network; the coordinator, routers, and terminal nodes constitute the hardware of the entire smart home subsystem. There must be exactly one coordinator in each network. The main functions of the coordinator are to establish the network, assign network addresses, maintain the binding table, and so on. Routers are optional: a network can contain one or more routers or none at all.

In a ZigBee wireless network, there are two modes of data transmission: beacon-enabled and nonbeacon-enabled. In a beacon-enabled network, a beacon is a special data frame indicating whether there is data to be sent in the network; in a nonbeacon-enabled network, there is no such special frame. In a beacon-enabled network, when a terminal device needs to transmit data to the coordinator, it first detects the beacon information and then transmits the data; after the coordinator receives the data, it sends a response frame, and the data transmission is complete.

In a nonbeacon-enabled network, the terminal device first sends a data request to the coordinator, the coordinator responds to the request, and the terminal device then sends its data. After the data transfer is over, the coordinator sends a response frame, as shown in Figure 7.
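The exchange can be illustrated with a small simulation (all class and field names are hypothetical; the real handshake is implemented by the ZigBee/IEEE 802.15.4 MAC layer):

```python
# Nonbeacon-enabled transfer: the terminal polls, the coordinator grants,
# the terminal sends its data, and the coordinator answers with a response
# frame that completes the transfer.

class Coordinator:
    def __init__(self):
        self.received = []

    def handle(self, frame):
        if frame["type"] == "DATA_REQUEST":
            return {"type": "ACK"}                 # grant the terminal's poll
        if frame["type"] == "DATA":
            self.received.append(frame["payload"])
            return {"type": "ACK"}                 # response frame ends the transfer

class Terminal:
    def __init__(self, coordinator):
        self.coordinator = coordinator

    def send(self, payload):
        grant = self.coordinator.handle({"type": "DATA_REQUEST"})
        if grant and grant["type"] == "ACK":
            return self.coordinator.handle({"type": "DATA", "payload": payload})

coord = Coordinator()
Terminal(coord).send({"temperature_c": 23.5})
print(coord.received)   # [{'temperature_c': 23.5}]
```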

The program also needs to be able to define the specific attributes of each node; the main program defines whether a node is a coordinator node or a terminal node. Figure 8 shows the running process of the main program of the smart home robot communication subsystem.

After the main program completes the initialization of the entire system, it starts to initialize the coordinator node in the wireless network. In the initialization process of the coordinator node, the CC2530 radio frequency chip is initialized first, then the ZigBee protocol stack is initialized and the CC2530 interrupts are enabled, and finally network formation is started. If the network is formed successfully, the LED on the coordinator that represents successful networking lights up and, at the same time, the serial port reports that a wireless network has been successfully created. After the wireless network is created, the main program enters the application layer and begins to detect whether there is a ZigBee wireless signal in the air. If there is a working terminal node within the signal coverage of the coordinator and the frequency of the signal transmitted by the terminal node is the same as that of the coordinator, the terminal node applies to join the coordinator's network. Once the terminal node joins the wireless network, the coordinator node starts to receive data and sends it to the upper computer through the asynchronous serial port, and the upper computer likewise shows that the connection is successful. Finally, the main program runs on the terminal node. Similar to the operation at the coordinator node, the main program first initializes the CC2530 radio frequency chip in the terminal node and starts to supply power to the temperature and humidity sensor; after these two steps, it initializes the protocol stack, and the terminal node then starts to work by sending a request to join the network, to which the coordinator responds.
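The start-up sequence can be restated as a compact sketch; every function below is a stub that prints the step it stands for, since the real CC2530/Z-Stack routines are not given in the paper:

```python
def step(msg):
    """Stand-in for one initialization routine; reports and succeeds."""
    print(msg)
    return True

def coordinator_main():
    step("initialize CC2530 radio frequency chip")
    step("initialize ZigBee protocol stack")
    step("enable CC2530 interrupts")
    if step("start network formation"):
        step("LED on: networking successful")
        step("UART: wireless network successfully created")
        step("enter application layer; listen for joining terminal nodes")

def terminal_main():
    step("initialize CC2530 radio frequency chip")
    step("power on temperature and humidity sensor")
    step("initialize ZigBee protocol stack")
    step("send request to join network; coordinator responds")

coordinator_main()
terminal_main()
```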

Figure 9 is the working flowchart of the ZigBee coordinator. After the coordinator and the terminal nodes are successfully networked, the coordinator can parse serial port commands to determine temperature, infrared, gas, and curtain queries, send the corresponding response message, and finally encapsulate the data packet and send it to the LCD.
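A sketch of this command dispatch; the command strings, sensor readers, and packet format are illustrative assumptions, since the paper only names the four query types:

```python
# Map each serial-port query to a reader stub; a real system would sample
# the corresponding terminal node over ZigBee instead.
SENSORS = {
    "TEMP?": lambda: "23.5 C",      # temperature query
    "IR?":   lambda: "no motion",   # human-body infrared query
    "GAS?":  lambda: "normal",      # combustible gas query
    "CURT?": lambda: "open",        # electric curtain query
}

def handle_serial_command(cmd):
    """Parse one serial command, build the response, and return the packet."""
    reader = SENSORS.get(cmd.strip().upper())
    if reader is None:
        return {"cmd": cmd, "error": "unknown command"}
    return {"cmd": cmd, "value": reader()}   # packet to be shown on the LCD

print(handle_serial_command("temp?"))   # {'cmd': 'temp?', 'value': '23.5 C'}
```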

5. Performance Testing of the Home Robot Control System Based on the Internet of Things and Fuzzy Control

This article combines the actual needs of home life to construct a home robot control system based on the Internet of Things and fuzzy control, and on this basis the system's performance is verified. First, commonly used household phrases are used to test the speech recognition of the home robot. A total of 66 test groups are set up and numbered, and the test results are shown in Table 1 and Figure 10.

The analysis shows that the speech recognition accuracy of the home robot control system constructed in this paper gradually improves as the number of training iterations increases, which meets the needs of home robot use. On this basis, the practical performance of the control system is verified and its home use effect is confirmed; the results are shown in Table 2 and Figure 11.

From the above analysis results, it can be seen that the home robot control system based on the Internet of Things and fuzzy control constructed in this paper achieves a practical effect.

6. Conclusion

The rapid development of intelligent control technology has expanded the functions of service robots for the family environment, and family members' requirements for service robots have likewise been upgraded from simply freeing their hands and reducing housework to emotional communication and intelligent companionship.

Based on the current technical conditions of the laboratory, this paper gives a brief overview of the mechanical structure design of the home service robot, focusing mainly on the core control system and the global path planning method. The robot adopts a wheeled differential mobile platform with high working efficiency and simple motion control. The human-like torso and mechanical arm design is more easily accepted by the service objects, is flexible and reliable, and its redundant degrees of freedom can satisfy a variety of grasping and handling tasks. The global path planning technology, integrated with an intelligent control algorithm, reduces the time needed to compute the optimal path while ensuring the accuracy of the environmental map. Moreover, the control system structure combines a highly reliable and easily expandable upper computer with a bottom-level motion controller and uses simple and practical system software, so the system has high stability and is simple and easy to operate.

Data Availability

The labeled datasets used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This study was sponsored by the Key Scientific and Technological Project of Henan Province, “Research on an Improved Routing and Spectrum Allocation Algorithm Based on Elastic Optical Networks (EON),” China (Grant no. 212102210560).