Advanced Sensor Technologies in Agricultural, Environmental, and Ecological Engineering 2021
Research on Intelligent Recognition and Management of Smart City Based on Machine Vision
In recent years, with the rapid development of science and technology, people's demands on the quality of daily production and life have gradually increased, and urbanization is advancing everywhere. The convenience of a city's water supply, transportation, environment, medical care, power grid, and other services affects the healthy life of its residents, so building smart cities has become an urgent task. For a smart city, intelligent recognition and management is a very large and complex problem. Machine-vision-based intelligent recognition, a branch of artificial intelligence, has long been a research hot spot; it provides convenient support for smart-city development and points the way for future smart-city construction. The role of machine vision is to connect cameras and other wireless sensors to a computer, simulating an eye that reproduces the human visual function. This simulated eye performs intelligent recognition in real time: it transmits the recognized information to the computer, which analyzes and processes it for judgment and identification. This paper mainly uses an optimal threshold segmentation algorithm based on machine-vision video processing to address urban intelligent recognition, applying the algorithm to several difficulties and obstacles that arise in practice. The traditional optimal threshold segmentation algorithm, the optimal threshold segmentation algorithm, and an improved optimal threshold segmentation algorithm are compared experimentally.
The experimental comparison shows that all three algorithms can intelligently identify building types and traffic signs in the city. Compared with the unimproved optimal threshold segmentation algorithm, the improved algorithm raises the response speed, real-time performance, stability, and accuracy. The improved optimal threshold segmentation algorithm therefore meets the needs of the proposed system and is useful in the intelligent recognition process; this improvement is also necessary and benefits subsequent system construction.
With the gradual improvement of people's production and living standards, convenience has become a basic requirement of modern city life. Cities cover many aspects, including environmental problems, traffic problems, population flow, medical care, and so on. Intelligent recognition and management of all these aspects can effectively raise the happiness index of a city and provide many conveniences for urban residents. Making a city intelligent depends mainly on obtaining information and data from every part of the city. People rely chiefly on their eyes to obtain information, but only within the range visible to the naked eye, so the amount collected is very small; at the scale of a whole city, the information a human eye can gather is negligible. The development of machine vision technology provides a new idea for building a smart city and solves precisely this problem of regional limitation. Throughout the city, the information a human eye would gather is instead captured by cameras and other wireless sensor devices and transmitted to a computer for processing and analysis, enabling guidance, positioning, measurement, detection, and recognition. Human vision does not stop at the eye: the eye collects surface information and transmits it to the brain, which rapidly analyzes and processes it into deeper, more understandable information. Machine vision follows the same principle. Wireless sensor devices such as cameras play the role of the eye and form a bridge to the computer, which acts as the brain and rapidly analyzes the data to extract the deeper information behind it.
Because the urban environment is changeable and complex, traditional detection methods rely on a single feature, whereas machine vision partitions the urban area. In the single-frame approach, a single-frame image algorithm performs detection and recognition with fast processing speed; in the stereoscopic approach, images are captured from different angles by multiple cameras with different fields of view. Machine vision has obvious advantages over traditional inspection. Compared with manual detection, human judgment is subjective and sometimes mistaken; machine vision eliminates the interference of subjective factors and avoids detection results that vary from person to person. It can also quantitatively analyze and describe the indexes of the tested objects, reducing detection and classification errors and improving efficiency and accuracy. Building a smart city aims at a healthy, reasonable, and sustainable economy, a harmonious, safe, and more comfortable life, and intelligent, information-based management.
Nowadays, with the rapid development of science and technology, every aspect of people's lives is gradually becoming intelligent, so the construction of smart cities is urgent.
2. Machine Vision Endows Smart City
2.1. Smart City
Smart cities integrate information technology and innovative technology into urban construction, open up and integrate urban systems and services, improve resource utilization efficiency, optimize the urban governance and service system, and improve citizens' quality of life. Smart-city construction includes intelligent technology, intelligent industry, and intelligent applications, mainly reflected in transportation, the power grid, medical care, environmental protection, and many other areas. The basic architecture of a smart city is shown in Figure 1 below:
A smart city is built on five layers. The first is the decision support system, including data mining and scenario analysis; the second is the operation and management system, including smart government affairs, smart commerce, and so on; the third is shared service facilities, provided through service-oriented architectures such as cloud computing; the fourth is the data infrastructure, including basic databases and thematic databases; and the last is the network infrastructure, including wireless communication networks, optical-fiber communication networks, and so on.
The first characteristic is comprehensive and in-depth perception. Through sensor technology, various sensing devices and intelligent systems carry out intelligent recognition and three-dimensional perception, promptly and actively analyzing information such as urban environmental changes so as to realize real-time perception of the urban environment, improve environmental awareness, and ensure the normal, efficient operation of each system. The second is ubiquitous broadband connectivity, which, as the neural network of the smart city, can be deployed on demand to enhance intelligent urban services. The third is intelligent integrated application, which gathers human wisdom through a new generation of perception technology, creates a smart-city "brain," promotes the combination of cloud and edge, and drives the intelligent integration of application facilities. The fourth is people-oriented continuous innovation: smart-city construction is people-oriented and relies on citizen participation, gathering the strength of the masses, starting from their needs, and innovating cooperatively to jointly build an emerging city with sustainable economic, social, and environmental development. Machine-vision algorithms generally take two forms: single-frame methods, in which a single-frame image algorithm performs fast detection and recognition, and stereoscopic methods, in which images from different angles are captured by multiple cameras with different fields of view.
2.2. Machine Vision Technology
The main principle of machine vision is to transmit the image information obtained by cameras and other wireless sensor devices to a computer for in-depth analysis. A neural network is constructed by deep learning and trained on massive image data to analyze the measured object accurately, so that it can finally be used for actual detection, measurement, and control. In short, the computer simulates an eye that reproduces the human visual function and realizes guidance, positioning, measurement, detection, and recognition. Machine vision technology offers high speed, a large amount of information, and many functions, and can complete high-intensity and complex computing work in essentially any scene. A typical machine vision system is shown in Figure 2 below:
As shown in Figure 2, the machine vision system passes images from the optical imaging system and image capture system through image acquisition and digitization and an intelligent workstation, where the intelligent execution module identifies, analyzes, and processes them. Machine vision technology can quickly and accurately capture a large number of signals, which is convenient for automatic processing and for concentrating control information. Compared with traditional manual measurement, it eliminates the negative influence of individual subjective factors, avoids measurements that vary from person to person, and quantitatively analyzes and describes the indicators of the measured objects, thereby reducing classification error and improving efficiency and accuracy. Building an intelligent city will realize healthy, reasonable, and sustainable operations, a harmonious, safe, and more comfortable life, and scientific, intelligent, information-based management.
2.3. Urban Road Intelligent Recognition Technology
For the construction of a smart city, the intellectualization of urban roads is an important, even essential, branch. It is a basic project with an important impact on people's production and life: with highly automated roads, practical social problems such as urban traffic congestion, accidents, and air pollution can be addressed more effectively. The main functions of machine recognition technology are road identification and tracking. First, on correctly marked road surfaces, identify the marking line ahead and the horizontal curvature of the road for motor-vehicle lane keeping; on unmarked surfaces, identify the pavement boundary, the horizontal curvature, and the position and orientation of the lane boundary. Next, detect lane changes in the current or adjacent sections; use pavement-boundary features for real-time processing, with monocular vision to identify and track large objects in the section ahead and evaluate the movement of preceding vehicles through monitoring and tracking; use the visually detected pitch angle to improve the accuracy of pavement-boundary evaluation; monitor and track the movement of other vehicles in adjacent sections to support high-speed driving; detect intersections, determine their length, and locate their centers to control turning behavior; use color images to identify road markings and improve the otherwise poor real-time performance; achieve real-time operation by distinguishing the relative motion of pedestrians and automated vehicles; determine whether motor vehicles are present ahead and behind so that lane changes are safe; and finally, estimate the field of view in foggy weather by analyzing the decline of image contrast.
That is, Figure 3 shows the key technologies of urban road detection, and Figure 4 shows the system flow chart:
According to the system flow chart, this intelligent recognition technology first loads the image of the detected road, then performs image preprocessing and processing, generates road information according to the processing results, and finally stores the information.
2.3.1. Filtering of Road Images
An unprocessed original image suffers to some extent from noise interference. This noise degrades image quality, making the image blurred and unclear, so the key feature points to be detected become difficult to find and analyzing the image becomes very difficult. The purpose of image smoothing is to eliminate these interference factors; the method is also called low-pass filtering, and it is usually implemented as a spatial filtering algorithm. In general, spatial filtering superimposes several signals within a certain spatial range that occupy the same frequency band.
The Kalman filtering algorithm uses a linear state equation to optimally estimate a signal disturbed by noise. It is convenient for computer programming and is the most widely used filtering method. The algorithm estimates the current state from the real-time data and the state at the previous moment, recursing through the system's state transition equation. For a linear dynamic system, the Kalman filtering model can be written as a state equation and an observation equation:

X_k = Φ_{k,k−1} X_{k−1} + Γ_{k,k−1} W_{k−1},
Z_k = H_k X_k + V_k,

where X_k is the n × 1 state vector at time k, Z_k is the m × 1 observation vector at time k, Γ_{k,k−1} is the n × r dynamic-noise matrix, Φ_{k,k−1} is the n × n state transition matrix from time k−1 to time k, V_k is the m × 1 observation noise at time k, and H_k is the m × n observation matrix at time k. W is the dynamic noise and V is the observation noise.
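As an illustration, the recursion can be sketched for the simplest scalar case, with the state transition and observation matrices both equal to 1; the noise variances q (dynamic noise W) and r (observation noise V) below are hypothetical values, not parameters from this paper:

```python
# Minimal one-dimensional Kalman filter sketch (Phi = H = 1).
def kalman_1d(measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Prediction: the state transition carries x forward (Phi = 1),
        # and the dynamic-noise variance q inflates the error covariance.
        p = p + q
        # Update: blend the prediction with the observation z using the
        # Kalman gain k.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates
```

Feeding in noisy readings scattered around a constant true value, the estimates settle near that value while the gain shrinks as confidence in the state grows.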
The BP network, also called the back-propagation neural network, is a multi-layer feedforward perceptron network whose structure can be simply divided into three layers. The algorithm repeatedly trains on the collected sample data, constantly modifying the network weights and thresholds to minimize the error of the output data and reach the desired goal. A BP network generally includes an input layer, a middle (hidden) layer, and an output layer; each layer consists of several neurons and is fully connected to the previous layer through connection weights. When an input-layer neuron receives a signal, it transmits it to each neuron in the hidden layer. Each hidden neuron computes a weighted sum and applies a nonlinear activation function to generate an output signal, which is passed on to the output layer. When the message reaches the output layer, the forward pass is complete and the output layer presents the result to the outside world. If the actual output does not match the expected output, an error back-propagation step is performed: starting from the output, the error adjusts and updates the weights and thresholds of each layer, propagating back through the hidden layer to the input layer. Repeating this cycle of forward information propagation and error back-propagation adjusts the weights of every layer of the whole neural network.
The learning process ends only when the set number of training iterations is reached or the computed global network error falls below a preset value; once the global error is small enough, the whole learning process of the algorithm terminates. If the result is unsatisfactory, the training mode is updated and another round of learning runs until the stopping conditions are met, after which the three-layer BP network model can satisfy the high-precision requirements.
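The forward pass, back-propagation, and weight update described above can be sketched for a tiny three-layer network; the task (XOR), layer sizes, learning rate, and epoch count below are illustrative choices, not values from this paper:

```python
import numpy as np

# Illustrative three-layer BP network: 2 inputs, 4 hidden neurons, 1 output.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))               # hidden thresholds (biases)
W2 = rng.normal(0.0, 1.0, (4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))               # output threshold (bias)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for epoch in range(5000):
    # Forward propagation: input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Error back-propagation: the output error adjusts the weights and
    # thresholds of each layer, from the output back toward the input.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= h.T @ d_out
    b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_h
    b1 -= d_h.sum(axis=0, keepdims=True)
```

The recorded mean-squared error falls over the epochs, which is exactly the repeated forward/backward cycle the text describes.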
The Kalman algorithm and the BP algorithm each have their own advantages; they are widely used in many fields, including machine vision, and have very high practical value. However, both still show deficiencies when applied to smart-city construction. The Kalman algorithm must work from an accurate existing model and existing data; for large, complex systems, building an accurate model and acquiring the data are not easy, and the required measurement accuracy cannot be achieved. The BP algorithm can meet high-precision needs, but it suffers from local minima, slow convergence, and weak resistance to external interference. To avoid these problems, the gray values of the image are first smoothed by neighborhood averaging:

g(x, y) = (1/M) Σ_{(i,j)∈S} f(i, j),   (1)

where S is the neighborhood of the point (x, y) and M is the number of pixels in S.
Formula (1) also weakens image details. Combining the averaging method with a weighted averaging method alleviates this blurring of detail. The expression is:

g(x, y) = Σ_{(i,j)∈S} w(i, j) f(i, j).   (2)
In formula (2), w(i, j) is the weight applied to the corresponding pixel and can be changed as needed. Generally, to keep the average gray value of the processed image unchanged, the coefficients in the template must sum to 1.
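A minimal sketch of the weighted-average template of formula (2), written for a 3 × 3 template whose coefficients sum to 1 (the border-handling choice of leaving edge pixels untouched is an assumption for simplicity):

```python
def weighted_average_filter(img, weights):
    """Smooth a 2-D image (list of lists of gray values) with a 3x3
    weight template whose coefficients sum to 1. Border pixels are
    left unchanged."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for i in range(-1, 2):
                for j in range(-1, 2):
                    acc += weights[i + 1][j + 1] * img[y + i][x + j]
            out[y][x] = acc
    return out
```

With a uniform template of 1/9 in every cell, this reduces to the plain averaging of formula (1), and a constant image passes through unchanged.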
Median filtering can remove image noise while protecting target boundaries from blurring. It is a nonlinear processing technique whose expression is:

g(x, y) = median_{(i,j)∈S} f(i, j),

where S is the neighborhood of the point (x, y).
Fast median filtering proceeds in five steps, as shown in Table 1 below:
Compared with the traditional median filter, the fast median filter can find the medians of two windows in a single sorting pass, and the processing speed improves significantly. In use, however, the window size must be appropriate: too large a window causes a marked loss of effective signal.
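For reference, the plain (non-fast) median filter that Table 1 accelerates can be sketched directly from the expression above; borders are left unchanged, which is a simplifying assumption:

```python
def median_filter(img, k=3):
    """Apply a k x k median filter to a 2-D image given as a list of
    lists. A plain version for clarity; border pixels are unchanged."""
    r = k // 2
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(r, h - r):
        for x in range(r, w - r):
            # Sort the window and take its middle element as the median.
            window = sorted(
                img[y + i][x + j]
                for i in range(-r, r + 1)
                for j in range(-r, r + 1)
            )
            out[y][x] = window[len(window) // 2]
    return out
```

A single impulse-noise pixel in an otherwise flat region is removed, while a genuine step edge spanning the window survives, which is the boundary-preserving property noted above.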
This involves four morphological operations: the morphological sum (dilation), the morphological difference (erosion), opening, and closing. Opening is erosion followed by dilation, and closing is dilation followed by erosion.

The morphological sum (dilation) is

A ⊕ B = { x | (B̂)_x ∩ A ≠ ∅ },

and the morphological difference (erosion) is

A ⊖ B = { x | B_x ⊆ A },

where B_x denotes the structuring element B translated to x and B̂ its reflection. Because opening and closing have corresponding smoothing effects, singular points in a shape can be detected; opening removes edge burrs and isolated patches from the image.
2.3.2. Image Edge Extraction
Boundary (edge) information is a key attribute for obtaining image features in image recognition; edges exist between target and target, and between target, background, and region, and are crucial in both image analysis and human vision. An image edge has two characteristics: orientation and amplitude. The image varies smoothly along the edge orientation but changes sharply perpendicular to it. Edge detection operators examine the neighborhood of each pixel and measure the gray-level rate of change; they also define an orientation and perform convolution with a directional derivative or mask. The Roberts edge detection operator finds pixel edges using a local difference operator. Its expression is

g(x, y) = |f(x, y) − f(x+1, y+1)| + |f(x+1, y) − f(x, y+1)|,

where f(x, y) is the input image with integer pixel coordinates.
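The Roberts cross-difference above can be sketched directly; the last row and column are left at zero because the operator needs the diagonal neighbor:

```python
def roberts(img):
    """Roberts edge magnitude |f(x,y)-f(x+1,y+1)| + |f(x+1,y)-f(x,y+1)|
    for a 2-D grayscale image given as a list of lists."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            out[y][x] = (abs(img[y][x] - img[y + 1][x + 1])
                         + abs(img[y + 1][x] - img[y][x + 1]))
    return out
```

On a vertical step edge the response is concentrated at the transition column and is zero inside the flat regions, matching the "sharp change perpendicular to the edge" property described above.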
2.4. Optimal Threshold Segmentation Algorithm
When processing an image, we attend to its conspicuous part and ignore other details, although the ignored parts may also carry meaningful features. In the overall picture, the part of interest is called the foreground or target, while the remaining, uninteresting part is called the background; separating the two is the image segmentation task. To segment a picture, it is first divided into several special regions with particular properties, from which the key targets are then extracted; this method is threshold segmentation. The key step of a threshold segmentation algorithm is to set threshold points and compare the threshold with the gray value at each pixel one by one, while the segmentation expands over the image in parallel so that the result directly yields the image regions. During segmentation, we want to reduce false segmentation as much as possible, and the optimal threshold method is designed precisely to do so, as shown in Figure 5 below:
Suppose the gray levels of the background and the target follow the densities p1(z) and p2(z) with prior probabilities P1 and P2 (P1 + P2 = 1), and the threshold is T. Then the total error probability is

E(T) = P2 ∫_{−∞}^{T} p2(z) dz + P1 ∫_{T}^{∞} p1(z) dz.

To find the threshold that minimizes this error, E(T) is differentiated with respect to T and the derivative set to zero, giving

P1 p1(T) = P2 p2(T).

If both densities are Gaussian with means μ1 and μ2 and the noise of the whole image comes from the same noise source, so that σ1 = σ2 = σ, then we can obtain

T = (μ1 + μ2)/2 + σ²/(μ1 − μ2) · ln(P2/P1).

If the prior probabilities are equal (P1 = P2) or the noise variance is 0, then

T = (μ1 + μ2)/2.
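Under the standard two-Gaussian, equal-variance assumption, the closed-form minimum-error threshold can be sketched as follows (the parameter names are only illustrative):

```python
import math

def optimal_threshold(mu1, mu2, sigma, p1, p2):
    """Minimum-error threshold for two equal-variance Gaussian classes:
    mu1, mu2 are the class means, sigma the shared standard deviation,
    p1, p2 the prior probabilities (p1 + p2 = 1)."""
    t = (mu1 + mu2) / 2.0
    if p1 != p2:
        # The correction term shifts T toward the mean of the less
        # likely class.
        t += sigma ** 2 / (mu1 - mu2) * math.log(p2 / p1)
    return t
```

With equal priors the threshold is simply the midpoint of the two means; a larger P2 pushes the threshold toward μ1 so that more pixels are assigned to class 2.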
3. Improved Algorithm of Optimal Threshold Segmentation
In the optimal threshold segmentation algorithm, to reduce the impact of contrast changes, each row and each column of the image can be treated as an image unit, which greatly weakens the influence of varying contrast. However, when a row or column of pixels is used as a unit, comparing the parameters of the mixed probability density functions of target and background takes a long time, which greatly increases the load on the system and worsens its real-time performance. The improved optimal-threshold algorithm therefore clarifies the definition of the optimal threshold while also improving the real-time performance of the calculation. First, an initial threshold value is chosen to make a preliminary division; its expression is

T_0 = (g_max + g_min)/2,

where g_max and g_min are the maximum and minimum gray values of the image.
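Starting from an initial threshold at the midpoint of the gray range, the classic iterative refinement (a common companion of this initialization, shown here as an assumed sketch rather than the paper's exact procedure) repeatedly re-splits the pixels and re-centers the threshold:

```python
def iterative_threshold(gray_values, eps=0.5):
    """Iterative threshold selection: start at the midpoint of the gray
    range and refine until the threshold stabilises within eps."""
    t = (min(gray_values) + max(gray_values)) / 2.0
    while True:
        low = [g for g in gray_values if g <= t]
        high = [g for g in gray_values if g > t]
        if not low or not high:
            return t  # one-sided split: keep the current threshold
        # New threshold: midpoint of the two class means.
        t_new = (sum(low) / len(low) + sum(high) / len(high)) / 2.0
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
```

For a clearly bimodal gray histogram the loop converges in a few iterations to a threshold between the two modes.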
3.1. Concept of Optimal Threshold in Segmentation
In general, the gray distribution of the image pixels follows a normal distribution N(μ, σ²), where x is the gray-level sample population. For a confidence level 1 − α,

P(μ − u_{α/2} σ < x < μ + u_{α/2} σ) = 1 − α,

so the confidence interval of the gray values can be identified as the interval

[ x̄ − u_{α/2} σ̂, x̄ + u_{α/2} σ̂ ],

where x̄ = (1/n) Σ_{i=1}^{n} x_i is the sample mean, n is the number of samples, and x̄ serves as the estimate of μ. The estimate of σ² is

σ̂² = (1/n) Σ_{i=1}^{n} (x_i − x̄)².
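The sample estimates and the resulting gray-value confidence interval can be sketched directly (k = 1.96 corresponds to roughly 95% confidence for a normal distribution; the function name is illustrative):

```python
import math

def gray_confidence_interval(samples, k=1.96):
    """Estimate the mean and standard deviation of the gray samples and
    return the interval [mean - k*std, mean + k*std]."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n  # biased estimate
    std = math.sqrt(var)
    return mean - k * std, mean + k * std
```

Pixels whose gray values fall outside this interval can then be treated as outliers with respect to the estimated distribution.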
3.2. Improved Optimal Threshold Segmentation Algorithm for Sequential Images
At this time, the two light sources are fixed, so the illumination of the two images is fixed; then let:

The relative error between the two quantities is:

For this algorithm, 5000 independent samples of δ are extracted and divided into 15 intervals, as shown in Table 2 below:

The statistics are shown in Table 3 below:
4. Experimental Simulation
In order to test the accuracy and running speed of the algorithms used in the developed system, the optimal threshold segmentation algorithm, the improved optimal threshold segmentation algorithm, and the traditional optimal threshold segmentation algorithm are used to intelligently identify types of urban buildings and urban traffic signs. Urban buildings can generally be divided into civil, industrial, and agricultural buildings, and civil buildings can be subdivided into residential and public buildings. Urban traffic signs can generally be divided into seven categories: warning signs, prohibition signs, indication signs, direction signs, tourist-area signs, road construction safety signs, and auxiliary signs.
4.1. Intelligent Recognition of Urban Building Types
This experiment is divided into three groups for recognition. The three algorithms recognize and analyze images intercepted from an aerial view of urban buildings; the intercepted images are shown in Figures 6–8:
From Table 4, comparing the three algorithms: the improved optimal threshold segmentation algorithm runs at 58 ms/frame at its slowest, 47 ms/frame at its fastest, and 50 ms/frame on average; the optimal threshold segmentation algorithm runs at 62 ms/frame at its slowest, 50 ms/frame at its fastest, and 56 ms/frame on average; and the traditional threshold segmentation algorithm runs at 97 ms/frame at its slowest, 83 ms/frame at its fastest, and 92 ms/frame on average. The improved optimal threshold segmentation algorithm also has the highest recognition accuracy. Therefore, it is the best of the three in both the real-time performance and the accuracy of detection.
4.2. Intelligent Identification of Urban Road Sign Types
The algorithms are again divided into three groups for identification, and urban road sign images are intercepted for identification and analysis. The intercepted road sign images are shown in Figure 9 below:
The detection results of the three algorithms are shown in Table 5, Figures 10 and 11.
From the experimental data, the recognition results and running speed of the improved optimal threshold segmentation algorithm are more accurate and faster than those of the other two algorithms. The optimal threshold segmentation algorithm runs at 93 ms/frame at its slowest, 69 ms/frame at its fastest, and 73 ms/frame on average; the improved optimal threshold segmentation algorithm averages 59 ms/frame; and the traditional threshold segmentation algorithm averages 91 ms/frame. The improved optimal threshold segmentation algorithm also has the highest recognition accuracy. Therefore, it is the best of the three in both the real-time performance and the accuracy of detection.
4.3. Compare the Response Speed of the Three Algorithms
In order to verify the real-time performance and accuracy of the improved optimal threshold segmentation algorithm, it is compared with the unimproved optimal threshold segmentation algorithm and the traditional optimal threshold segmentation algorithm. The experimental environment is a PIV 1.7 GHz CPU with 512 MB of memory, and the image size is 320 × 240.
When building the system, the algorithm is its foundation, and the response speed of the algorithm determines the quality of the system, so comparing the response speeds of the algorithms is also very important. Five groups of experiments are conducted, with time measured in milliseconds. The experimental results are shown in Figure 12 below:
The improved optimal threshold segmentation algorithm also has the lowest response time.
The accuracy results of the three algorithms are shown in Table 6 and Figure 13:
In terms of accuracy over the number of processed image frames, the improved optimal threshold segmentation algorithm is more accurate than both the traditional optimal threshold segmentation algorithm and the unimproved optimal threshold segmentation algorithm, and its processing accuracy also rises as the number of processed frames increases.
4.4. Evaluation Results
In this experiment, the intelligent recognition algorithms are tested in the urban environment under rainy and sunny weather and under congested and clear road conditions. Rainy days give low visibility and sunny days high visibility, and road congestion has a certain impact on the response time and accuracy of an intelligent recognition algorithm, but the improved optimal threshold segmentation algorithm in this paper outperforms the other two algorithms in all cases, so the improved algorithm is very necessary for building the system.
Across the different scenarios, the accuracy of the improved optimal threshold segmentation algorithm is greatly improved. Compared with the unmodified optimal threshold segmentation algorithm, it is better in response speed, real-time performance, stability, and accuracy. The improved optimal threshold segmentation algorithm therefore meets the needs of building the system and is useful in the intelligent recognition process; this improvement is also necessary and lays a foundation for subsequent system construction.
From the analysis of the four stages above, first of all, the intelligent recognition and management of a smart city will greatly help people's production and life; it is very convenient and can effectively improve quality of life. Smart-city intelligent recognition and management is indeed a very large and complex project that cannot be completed by human hands alone across the many aspects of the urban environment, roads, transportation, and so on. Machine vision is a crystallization of human wisdom. As a branch of artificial intelligence, it collects and analyzes all kinds of urban information by means of wireless sensor devices and solves a great problem. As an emerging technology, machine vision will remain a hot research object in the future; it can be applied not only to the construction of smart cities but also to industrial manufacturing and other large industries, providing great help in economic development and construction.
Nevertheless, even today, the presence of artificial intelligence is felt mainly in first- and second-tier cities; in less developed cities it is not yet common. Scientific and technological innovation therefore still requires continuous effort and learning before it can lead the world and truly improve people's quality of life, so the development of machine vision still has a long way to go.
Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request.
Conflicts of Interest
The authors declared that they have no conflicts of interest regarding this work.
Acknowledgments
This work was partially supported by the Natural Science Foundation of Chongqing (cstc2020jcyj-msxmX0943) and by the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant Nos. KJQN202001901 and KJQN202001903).
H. Luo, L. Chen, Z. Li, Z. Ding, and X. Xu, “Frontal immunoaffinity Chromatography with Mass Spectrometric Detection: A method for finding active compounds from Traditional Chinese Herbs,” Analytical Chemistry, vol. 75, no. 16, pp. 3994–3998, 2003.View at: Publisher Site | Google Scholar
C. S. Fuh, S. W. Cho, and K. Essig, “Hierarchical color image region segmentation for content-based image retrieval system,” IEEE Transactions on Image Processing, vol. 9, no. 1, pp. 156–162, 2000.View at: Publisher Site | Google Scholar
Y. Yitzhaky, “Restoration of an image degraded by vibrations using only a single frame,” Optical Engineering, vol. 39, no. 8, pp. 2083–2091, 2000.View at: Publisher Site | Google Scholar
C. Zhou and J. Zhou, “Single-Frame Remote Sensing Image Super-Resolution Reconstruction Algorithm Based on Two-Dimensional Wavelet,” in 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), pp. 360–363, Chongqing, China, 2018.View at: Publisher Site | Google Scholar
N. Inaba, K. Ueda, and S. Yamauchi, “Magnified stereoscopic radiography of the skull (II. Radiographic Technology for Stereoscopic Magnification),” Japanese Journal of Radiological Technology, vol. 39, no. 3, pp. 333–336, 1983.View at: Publisher Site | Google Scholar
J. Taube, R. Muller, and J. Ranck, “Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis,” Journal of Neuroscience, vol. 10, no. 2, pp. 420–435, 1990.View at: Publisher Site | Google Scholar