Edge computing is an important foundation for building 5G networks, but in China there are still few applications or inventions based on it. To promote the application of edge computing, this article designs a human behavior recognition system from a patent perspective, providing a reference for other researchers. The paper discusses and designs the hardware and software schemes, and the related communication methods, of a new edge computing framework that combines edge devices with a cloud computing center. After the collected human behavior data are processed, the behaviors of the corresponding monitored objects are classified and modeled, and the distributed computing of the edge devices is then used to refine these models. The resulting system is characterized by low energy consumption and fast response. The experimental results show that the recognition efficiency of edge computing technology from the patent perspective is greatly improved: its recognition speed is more than 30% faster than that of the comparison algorithms, and its recognition accuracy reaches 0.852, about 20% higher than that of traditional recognition. These results show that edge computing technology based on a patent perspective can play an important role in daily life.

1. Introduction

With the rapid development of electronic, information, and communication technologies such as the Internet of Things, 5G, blockchain, and sensors, the growth of various types of data has shown an exponential trend, and the demands that massive data place on computing power and speed are also increasing [1, 2]. Cloud computing technology provides users with almost unlimited computing power through a large number of high-performance servers in data centers and is one of the important solutions for big data analysis and processing. However, cloud computing also has a set of issues, such as high network latency, high cost, and security and privacy concerns, and it cannot meet all big data analysis and processing requirements. For example, in industrial production and operation scenarios, real-time response to accidents, failures, and emergencies is very important, while network data capture scenarios are more sensitive to data transmission costs [3].

For this reason, edge data processing technology with edge computing at its core has emerged and been widely promoted [4]. Edge computing is defined as a distributed open platform that integrates the core capabilities of networking, computing, storage, and applications at the edge of the network, close to the source of things or data [5, 6]. In fact, edge computing is a new ecological model: by converging five types of resources, namely network, computing, storage, application, and intelligence, at the edge of the network, it can improve network service performance and open up network control capabilities, thereby stimulating an ecosystem similar to that of the mobile Internet [5].

For machine learning and clone node recognition in edge computing, experts at home and abroad have produced many studies [7]. Abroad, Tang and Chanson proposed an optimal task offloading allocation strategy based on a study of cloud and fog combination; experimental simulation showed that, under a service delay constraint, sacrificing a small amount of computing resources can save communication bandwidth and reduce network delay, so that the energy consumption of the cloud can be reduced to a minimum [8]. Dimokas proposed the PCICC cache strategy according to the characteristics of cache nodes [9]. Rabinovich studied the difficulties and challenges facing cloud and fog fusion services, sensor network technology, and the network virtualization technology of cloud and fog computing [10]. Because computing research started later in China, there is comparatively little domestic research on edge computing. Xue Heyu believes that, with the popularization of artificial intelligence applications, traditional identification systems are vulnerable to infringement [11], and that edge computing can be introduced into traditional cloud server systems; introducing edge computing reduces the number of communications between the server and the user and improves the security performance of the system [12]. Jie believes that edge computing is close to the computing nodes, which creates great challenges: clone nodes in particular are difficult to identify because they carry the same information as legitimate nodes. He therefore argues that clone nodes must be identified on the basis of channel information, and that edge computing in the network improves the recognition accuracy [13]. These studies have a certain reference value for this article, but because the data they cite are narrow and largely limited to individual industries, their conclusions are difficult to generalize.

Based on edge computing innovation from the perspective of patents, this paper studies cloud computing and big data and proposes a data acquisition and processing system architecture based on edge computing, which uses edge computing close to the users to provide a low-latency, high-capacity data acquisition, processing, and analysis scheme. The distributed computing of the edge devices is used to refine the models, thereby realizing a human behavior recognition system with high efficiency, low energy consumption, and fast response and verifying the applicability of the proposed computing framework to physical data processing.

2. Innovative Methods of Edge Computing Technology

2.1. Patent

The new century is the era of the knowledge economy. Whether an enterprise can survive and stand out in fierce market competition depends increasingly on its scientific and technological innovation capability and on effective management and application of its independent intellectual property rights [14]. Innovation is the driving force of enterprise development, and only an enterprise that continues to innovate can remain competitive. As innovation grows more complex and resources and corporate capabilities remain limited, corporate innovation demands ever more capital investment and accumulation [15]. How to consolidate and leverage existing resources, transform them into innovation results and economic benefits, and form a competitive advantage that adapts to an era of accelerating product upgrades is especially important [16].

The ability of an enterprise to create and apply intellectual property rights is the key to enhancing its core competitiveness, and it is also an inevitable choice for adapting to an ever-changing market environment. Enterprises, especially high-tech enterprises, are the main force in the creation of intellectual property rights and have produced a large number of patent achievements in the course of actively participating in scientific research and market activities. Only by combining patents with the market environment and maximizing the results of these patents can we continuously stimulate the creative enthusiasm of stakeholders and continuously improve the technical level and market competitiveness of the company [17].

Patents represent intellectual property and are a main source of technical information, containing 90%-95% of the world's scientific and technological information. Patents are a common indicator of the output of enterprise innovation activities, and the patent data of enterprises are relatively easy to obtain, so they can reflect the innovation activities of enterprises fairly objectively [18].

Patent portfolio theory uses patent activity and patent quality as evaluation indices for corporate patents. Taking the number of patent applications as the indicator of patent activity in a company's patent portfolio, it holds that companies with many patent applications are more innovative and pioneering and that their corporate value is also higher. The number of patent applications reflects not only the number of a company's patents but also the degree of activity of its innovation: the more patent applications a company files each year, the stronger its innovation activities [19]. There is a certain difference between patent quality and patent value, and the two cannot be equated. The difference is mainly reflected in two aspects: (1) subjectivity versus objectivity: the quality of a patent depends on its advancement and importance within its field, which mainly reflects the creativity and novelty of the patent, so the judgment is relatively objective; (2) the value manifestation process: the quality of a patent reflects the practical problems it solves, and whether or not it is implemented does not change the objective fact of whether it can solve those problems, while the value of a patent is the economic benefit embodied after implementation [20].

Generally, we can use the interaction terms of innovation ability with patent quantity and patent quality to examine the role of patent quantity and patent quality in the process of affecting enterprise performance. The following measurement model is constructed:

Perf_i = β0 + Σ_k βk · X_ik + ε_i

Among them, β0 is a constant term, and the βk are the regression coefficients of the respective variables and the control variables, used to express the influence of the independent variables and the control variables on the company's performance. The industry selection function can be used to calculate the value λ_i corresponding to each observation:

λ_i = φ(z_i) / Φ(z_i)

where φ(·) and Φ(·) are, respectively, the probability density function and the cumulative distribution function corresponding to the model. The following model is then used to estimate the factors affecting corporate performance, namely,

Perf_i = β0 + Σ_k βk · X_ik + βλ · λ_i + ε_i

Among them, X_ik represents the series of explanatory variables and control variables that affect corporate performance; βk is the estimated coefficient of the variable X_ik; λ_i is the value obtained in the first stage. To explore the role of patent quantity and patent quality in the process of impacting a company's performance, interaction terms of innovation capacity with patent quantity and patent quality are used, and the following measurement model is constructed:

Perf_i = β0 + β1 · Innov_i + β2 · Pat_i + β3 · (Innov_i × Pat_i) + Σ_k γk · C_ik + ε_i
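The first-stage correction described above, a probability density divided by a cumulative distribution evaluated per observation, matches the inverse Mills ratio used in Heckman-style two-stage estimation; reading the paper's unprinted formula this way is an assumption. A minimal sketch of that correction term:

```python
import math

def normal_pdf(z):
    """Standard normal probability density φ(z)."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    """Standard normal cumulative distribution Φ(z), via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def inverse_mills_ratio(z):
    """λ(z) = φ(z) / Φ(z): the first-stage correction term that is added
    as an extra regressor in the second-stage performance equation."""
    return normal_pdf(z) / normal_cdf(z)

# Correction terms for a few illustrative first-stage indices.
lambdas = [inverse_mills_ratio(z) for z in (-1.0, 0.0, 1.0)]
```

In the second stage, each λ_i would simply join the explanatory variables of an ordinary regression.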

Among them, β3 is the coefficient of the moderating (interaction) variable. In order to avoid the influence of inconsistent dimensions on the statistical results, the data were standardized in the preliminary research:

Z_i = (X_i − X̄) / S

Among them, Z_i represents the normalized observation value, X_i represents the original observation value, X̄ is the average value of the original data, and S is the sample standard deviation.
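The standardization described above is the familiar z-score transform; a minimal sketch in Python (the example values are illustrative):

```python
from statistics import mean, stdev

def standardize(values):
    """Z-score standardization: z_i = (x_i - mean) / sample standard deviation."""
    m = mean(values)
    s = stdev(values)  # sample (n - 1) standard deviation
    return [(x - m) / s for x in values]

scores = [10.0, 12.0, 14.0, 16.0, 18.0]
z = standardize(scores)
# The standardized series has mean 0 and unit sample standard deviation.
```

After this transform, variables measured in patents, yuan, or counts all enter the regression on the same dimensionless scale.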

2.2. Edge Computing

Edge computing systems can exhibit richer and more complex characteristics than other systems. The theoretical research models and methods each have their own applicable scope; the part and the whole are no longer unified, and they do not satisfy the superposition principle of linear systems. Even for otherwise known systems, modeling and controlling them is quite difficult, and when the structure of the controlled object is completely unknown, system research becomes more complicated still. This makes the model identification and control of edge computing systems a hot topic in the current control field [21, 22]. In recent years, artificial intelligence has been widely applied to the identification of black-box systems; among these applications, edge computing has attracted much attention, and its unique distributed computing capability has brought vitality to the modeling of nonlinear systems.

In the process of researching real-time data services, it is not difficult to find that data communication is inseparable from real-time data collection, real-time data control, or real-time data transmission. Improper selection of communication methods can cause data delay or loss. Therefore, the choice of communication mode and communication protocol is particularly important. For network data, the edge device itself needs to access the target webpage through the network module, download the required data, and complete the data collection task [23]. The edge devices connected to the Internet and the cloud computing center complete data exchange through wireless transmission, so other devices are not needed for data collection [24]. The frame diagram of the system is shown in Figure 1.

Edge computing provides services to users locally. On the one hand, it can reduce service processing delays and improve work efficiency; on the other hand, it can reduce network and bandwidth requirements and save system overhead. Compared with cloud computing, edge computing has great advantages in response time and service quality, and it meets requirements for low latency, high reliability, and security [25]. In addition, as a complement to cloud computing, edge computing reduces the pressure on the data center, reduces bandwidth requirements, balances data processing, and improves overall system efficiency. In recent years, with the rapid development of the Internet of Things, edge computing has been widely used in various fields, such as the Internet of Vehicles, wireless sensors and actuators, smart homes, and software-defined networks [26]. In future development, edge computing will complement and integrate with cloud computing and be widely used in more industries and fields, providing an ideal software and hardware support platform for information processing in the Internet of Things era [27].

The real-time data service architecture of edge computing divides real-time data services into edge computing services and cloud application services. Edge computing services undertake the main work of real-time data services; cloud application services, as the data receivers of the edge computing services, mainly provide web services, and both services are supported by database services. The development of edge computing has promoted the development of big data, cloud computing, and informatization. It now involves many fields such as medical care, agriculture, geological survey, astronomy, and the Internet of Things, and has even expanded into news and e-government. The huge value contained in massive data brings new development opportunities to each field [28]. However, the generation of massive data also brings huge challenges to data processing: it not only requires strong computing and analysis capabilities but also requires a large storage space to store the data, which would place excessive pressure on a central computing center. Edge computing solves this problem well. Generally speaking, the smoothing step of the edge detection algorithm is as follows:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))

where σ is the mean square deviation. Smoothing is achieved by convolving kernels with different σ values with the image f(x, y). The resulting expression is as follows:

f_s(x, y) = G(x, y) * f(x, y)

The effect of edge detection is related to the value of σ: the smaller σ is, the weaker the smoothing effect and the more noise remains.
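The effect of σ can be illustrated with a 1-D Gaussian smoothing kernel: a small σ concentrates the kernel's weight at the center, so noise survives, while a large σ spreads the weight and flattens noise spikes. A minimal sketch, independent of any particular image library (the signal and σ values are illustrative):

```python
import math

def gaussian_kernel(sigma, radius=3):
    """Discrete, normalized 1-D Gaussian kernel, G(x) ∝ exp(-x² / 2σ²)."""
    raw = [math.exp(-(x * x) / (2.0 * sigma * sigma))
           for x in range(-radius, radius + 1)]
    total = sum(raw)
    return [v / total for v in raw]

def convolve(signal, kernel):
    """Same-length convolution with zero padding at the borders."""
    r = len(kernel) // 2
    padded = [0.0] * r + list(signal) + [0.0] * r
    return [sum(padded[i + j] * kernel[j] for j in range(len(kernel)))
            for i in range(len(signal))]

# A step edge with one noisy sample; compare smoothing strengths.
signal = [0, 0, 0, 5, 0, 0, 10, 10, 10, 10]
light = convolve(signal, gaussian_kernel(sigma=0.5))  # small σ: spike survives
heavy = convolve(signal, gaussian_kernel(sigma=2.0))  # large σ: spike flattened
```

An edge detector would then differentiate the smoothed signal, trading noise suppression against edge localization through σ.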



The mathematical morphology method uses set algebra theory to analyze and process images based on their geometric characteristics. It mainly uses erosion and dilation operations to extract morphological boundaries: through the contraction effect of the erosion operation and the expansion effect of the dilation operation, combined with certain logical operations, a more precise boundary can be obtained. In order to obtain better edge computing nodes, the fitness function is determined according to the idea of the maximum between-class variance (Otsu) method, and the formula is as follows:

Among them, t is the threshold, F(t) is the fitness function, N0 is the number of nodes below the threshold, and N1 is the number of nodes above it. A random number is generated in the interval [0, 1), and the individual corresponding to the region in which the random number falls is selected.
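The selection step described above is classic roulette-wheel selection: each candidate threshold owns an interval proportional to its fitness, and the candidate whose interval contains the random draw is selected. A hedged sketch follows; since the paper does not print its exact fitness formula, the between-class variance used here follows the standard Otsu form:

```python
import random

def between_class_variance(values, t):
    """Otsu-style fitness: weighted squared distance between the means of
    the two groups split by threshold t. Larger is better."""
    low = [v for v in values if v < t]
    high = [v for v in values if v >= t]
    if not low or not high:
        return 0.0
    w0, w1 = len(low) / len(values), len(high) / len(values)
    m0, m1 = sum(low) / len(low), sum(high) / len(high)
    return w0 * w1 * (m0 - m1) ** 2

def roulette_select(candidates, fitnesses, rng=random):
    """Pick one candidate with probability proportional to its fitness."""
    total = sum(fitnesses)
    r = rng.random() * total
    acc = 0.0
    for cand, fit in zip(candidates, fitnesses):
        acc += fit
        if r < acc:
            return cand
    return candidates[-1]

values = [1, 2, 2, 3, 8, 9, 9, 10]        # two clear clusters
thresholds = list(range(2, 10))
fits = [between_class_variance(values, t) for t in thresholds]
best = thresholds[fits.index(max(fits))]  # deterministic optimum, for reference
chosen = roulette_select(thresholds, fits)
```

Roulette selection keeps some diversity among candidate thresholds instead of always committing to the single best one, which is what a genetic search needs.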

2.3. Data Collection

The system completes data analysis and collection through algorithms such as machine learning and artificial intelligence, minimizes the deviation of the raw data, and corrects the data to obtain more accurate results. A distributed big data analysis and mining platform in which edge computing and cloud computing collaborate is established to provide the capabilities of each module [29]. The interaction between the functional modules is relatively simple: a unified software system usually issues the commands and scheduling, establishes long-term stable communication, and ensures the reasonable operation of the entire system [30, 31].

Aiming at the scene of human behavior recognition, a human behavior data collection, processing, and analysis system based on edge computing is designed and implemented. Using the distributed computing of the edge devices to refine the models, a human behavior recognition system with high efficiency, low energy consumption, and fast response is realized. After data collection and preprocessing, data suitable for direct modeling are obtained. According to the mining target and the form of the data, models such as classification and prediction, cluster analysis, and association rules can be established to help system users extract the value contained in the data. To support the overall hardware design described above and achieve the corresponding computing capability, a suitable microprocessor must be selected as the core of the overall system: the edge computing equipment [4].

The model is sent to the edge device, and the edge device verifies the basic model against the data it receives in real time. When the user or the edge device itself determines that the results produced by the model do not conform to the actual situation, the model is returned to the cloud computing center. The cloud computing center uses the newly received data and feedback results to modify the model, then sends the retrained model to the edge device for another test, looping until the test results of the model meet the requirements of the edge device; the finally generated model is then used normally on the edge device [32].
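The verify-and-retrain loop described above can be sketched as follows. `CloudCenter` and `EdgeDevice` are illustrative stand-ins, and a trivial threshold classifier substitutes for the paper's actual model:

```python
class CloudCenter:
    """Illustrative cloud side: (re)trains a model from fed-back data."""
    def __init__(self):
        self.data = []  # (sample, correct_label) pairs received from the edge

    def train(self):
        # Toy stand-in for the real classifier: threshold midway between
        # the mean positive and mean negative sample seen so far.
        pos = [x for x, y in self.data if y == 1]
        neg = [x for x, y in self.data if y == 0]
        if pos and neg:
            threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        else:
            threshold = 0.0  # nothing informative received yet
        return lambda x: 1 if x > threshold else 0

class EdgeDevice:
    """Illustrative edge side: tests the model against live data and
    returns it to the cloud for retraining until it fits well enough."""
    def __init__(self, cloud, required_accuracy=0.9):
        self.cloud = cloud
        self.required_accuracy = required_accuracy

    def run(self, labeled_stream, max_rounds=10):
        model = self.cloud.train()
        accuracy = 0.0
        for _ in range(max_rounds):
            wrong = [(x, y) for x, y in labeled_stream if model(x) != y]
            accuracy = 1 - len(wrong) / len(labeled_stream)
            if accuracy >= self.required_accuracy:
                break                               # model accepted on the edge
            self.cloud.data.extend(labeled_stream)  # feed back data + results
            model = self.cloud.train()              # cloud retrains and resends
        return model, accuracy

stream = [(0.2, 0), (0.3, 0), (0.9, 1), (1.1, 1)]
model, accuracy = EdgeDevice(CloudCenter()).run(stream)
```

The division of labor mirrors the paper's design: the heavy retraining stays in the cloud, while the edge device only evaluates the current model and decides whether to accept it.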

Finally, the overall functional architecture of the system can be obtained, which consists of three parts: the data acquisition module, the data preprocessing module, and the data analysis module. The overall operation process is shown in Figure 2. Placing the analysis model on the edge device reduces the computing pressure on the cloud on the one hand; on the other hand, in delay-sensitive scenarios, having the edge device complete the data analysis effectively reduces the time needed to generate results, so the real-time analysis results are presented to system users promptly.

The relevant sensors connected to the edge device are used to complete the real-time collection of data required for human body recognition, and then, the data is correspondingly preprocessed and sent to the cloud. The cloud computing center uses related machine learning algorithms to complete the modeling work and send the model back to the edge device. The edge device uses the trained model to predict the posture behavior of the currently monitored object [33]. When the predicted result of the model does not match the current status quo, it can also inform the cloud computing center of the data and the correct result to correct the relevant model.

3. Innovation Experiment of Edge Computing Technology

3.1. Data Sources

The collection of physical data in this system is completed by the edge device controlling several sensors. In the scene of human behavior recognition, the sensors that can serve as data sources for model training include numerical sensors such as gyroscopes, IMUs, and heart rate monitors, and image collectors such as infrared and thermal imagers. In this scenario, given the system requirements and the model characteristics, IMU equipment is used to collect the human behavior data.

3.2. Edge Computing Equipment

The receiving and sending of the collected data are handled by the data processing module and the human behavior recognition module designed later, and an edge device that meets the system requirements is needed to support these functions. The most popular and mature platforms are the Raspberry Pi and Arduino series; commonly used devices include the RPi B+, the RPi 2, and the Arduino UNO. The device parameters are shown in Table 1.

3.3. Human Behavior Data Collection

In order to ensure that the collected data are suitable for model testing, this experiment requires the monitored subjects to complete corresponding actions according to predetermined instructions for the behavior tags required by the experiment, including a series of daily actions and some behaviors under special circumstances, such as falling. The devices provided include a built-in mobile power supply, a serial port, and a Bluetooth interface. When the sensor is turned on, the data collector automatically starts working and continuously transmits the collected physical data to the serial port at the set frequency. The collection equipment is shown in Figure 3.
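Reading the fixed-frequency serial stream might look like the following sketch. The frame format (one comma-separated line of accelerometer and gyroscope values) and the simulated stream are assumptions, since the paper does not specify its wire protocol; a real deployment would open the serial port (for example with pyserial) instead of the in-memory stream used here:

```python
import io

def parse_imu_line(line):
    """Parse one assumed frame 'ax,ay,az,gx,gy,gz' into a dict of floats.
    Returns None for incomplete or corrupted frames."""
    parts = line.strip().split(",")
    if len(parts) != 6:
        return None
    try:
        ax, ay, az, gx, gy, gz = (float(p) for p in parts)
    except ValueError:
        return None
    return {"accel": (ax, ay, az), "gyro": (gx, gy, gz)}

def collect(stream, max_frames=100):
    """Read frames until the stream ends, skipping corrupted lines."""
    frames = []
    for raw in stream:
        frame = parse_imu_line(raw.decode("ascii", errors="replace"))
        if frame is not None:
            frames.append(frame)
        if len(frames) >= max_frames:
            break
    return frames

# Simulated serial stream: two good frames and one corrupted line.
simulated = io.BytesIO(
    b"0.01,-0.02,0.98,0.1,0.0,-0.1\n"
    b"garbage line\n"
    b"0.02,-0.01,0.99,0.0,0.1,-0.2\n"
)
frames = collect(simulated)
```

Dropping malformed frames at the edge keeps noise out of the preprocessing stage before anything is sent to the cloud.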

3.4. Statistics

All data analysis in this article uses SPSS 19.0. Statistical tests are two-sided, with significance defined as P < 0.05. Statistical results are displayed as mean ± standard deviation. When the test data obey a normal distribution, the paired t-test is used for within-group comparison and the independent-samples t-test for between-group comparison.
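The between-group comparison can be sketched without SPSS; this computes Welch's form of the independent-samples t statistic (whether the paper assumed equal variances is not stated), leaving the critical-value lookup to a t table:

```python
from statistics import mean, variance

def independent_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (no equal-variance
    assumption). Compare |t| against the two-sided critical value for the
    chosen significance level (0.05 in this paper)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample (n - 1) variances
    se = (va / na + vb / nb) ** 0.5
    return (mean(sample_a) - mean(sample_b)) / se

# Illustrative samples: a tiny, almost certainly non-significant difference.
t = independent_t([1.0, 2.0, 3.0], [1.1, 2.1, 3.1])
```

A full test would also compute the Welch-Satterthwaite degrees of freedom before looking up the critical value.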

4. Experimental Analysis of Edge Computing Technology Innovation

4.1. Edge Computing Patent Changes

Through literature surveys and data from the patent office, we have made statistics on the changes in edge computing patents in recent years, as shown in Table 2.

From the table, we can see that, over time, the technology is constantly improving and the number of edge computing patents is increasing. In 2016, edge computing patents numbered only about half of cloud computing patents, but after 5 years of development, the number of edge computing patents has approached that of cloud computing and shows a trend of surpassing it, which indicates that edge computing has great potential for development. In the experiment in this article, we need to collect human motion data through equipment. In order to ensure that the data are valid and stable, we first tested the collection capabilities of the different devices, as shown in Figure 4.

As shown in Figure 4, the three devices perform differently in data collection. The RPi B+ has the largest acquisition range, and its optimal value is also the largest. The data collected by the RPi 2 are the most stable of the three devices, with the smallest fluctuations, a large memory, and a large collection volume. The data collected by the Arduino UNO are the best, with fluctuations only slightly larger than those of the RPi 2, but its memory is the smallest and its collection volume is the lowest. After comprehensive consideration, we decided to use the RPi 2 for collection.

As shown in Table 3, we explain the names and types of the variables; after sorting the collected data, we obtained the experimental data, which contain the relevant data of a total of 8 male and female monitored subjects.

4.2. Effects of Different Algorithms

We collected the data under different algorithms and compared the differences in the values during their operation, as shown in Table 4.

Through analysis, it can be seen that, compared with the other baseline algorithms, edge computing runs faster and has greater advantages in precision and recall. Compared with weak classifiers such as decision trees, the advantages of the ensemble learning algorithm are more obvious. Because edge computing applies more regularization to its model than the other approaches, the model has stronger generalization ability. In actual work, as the amount of data continues to increase, the accuracy advantage of edge computing will become even more obvious.

In order to illustrate the cost of running different algorithms, we let different algorithms run separately and count the consumption. Due to differences in hardware configuration, different computer execution results may have deviations, but the relative differences between the running times of different algorithms should be comparable. The specific data is shown in Figure 5.

From Figure 5, we can see that in terms of computational cost, the advantages of edge-based computing are not always obvious: on two indicators its cost is higher than that of the other methods, but on the remaining indicators it is much lower, particularly on the Blogs and Astro datasets, where it is about 40% lower than the other algorithms.

From Figure 6, we can see the ranking monotonicity of the different algorithms in a real network. The edge-based algorithm has an obvious advantage in this regard: its ranking monotonicity is higher than that of Naive Bayes, MDD, and the other methods, exceeding them by more than 30%. We then calculated the computation cycles of the different algorithms per unit time and obtained Figure 7.

From Figure 7, we can see that at the beginning there is little difference in operating efficiency between the algorithms, but after more than 600 runs, the gap between them begins to grow, and the speed advantage of edge computing over the other algorithms becomes larger and larger; after 1000 runs, its time is more than 20% less than that of the other algorithms.

5. Conclusion

In recent years, society has gradually entered the era of "big data," and with the advent of cloud computing, the demand for big data processing and application functions is increasing. Cloud computing has been vigorously promoted for its advantages of low operating cost, dynamic scalability, and simplified operation and maintenance, and cloud computing-related industries have developed rapidly in China. From the perspective of patents, this paper implements a data collection and processing system based on edge computing. Through the coordination and interaction of edge devices and a cloud computing center, it solves problems of the original system such as slow computing speed and high energy consumption, and corresponding designs and implementations are provided for the specific applications of the system in different scenarios. Due to time and technical constraints, this article also has some shortcomings. In the design of human behavior data transmission, the transmission frequency could be adapted to the on-site situation; for example, when the edge device detects that the human behavior has not changed for a long time, the transmission frequency can be slowed down to further reduce the energy consumption of the equipment. When storing document data, because the collected data take the form of key-value pairs, the databases that have become popular in recent years can be used to improve the efficiency of transmission and storage. These tasks remain to be improved in future research.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This work was supported by the Scientific Research Project of the School of Economics, Northwest University of Political Science and Law, in 2019, "Research on the Effect of Government Subsidies on Promoting Independent Innovation Activities of Science and Technology Enterprises" (19xyky19).