Abstract

This paper proposes a novel signal source identification system composed of unmanned aerial vehicles (UAVs) and a blockchain, in which the identification method makes full use of binocular camera data and received signal strength. The UAV tasks are organized by using blockchain technology and smart contracts. To tackle the challenge that the transmit power of the object and the channel path loss coefficient are unknown to the UAV, a maximum likelihood estimation method is developed to estimate the parameters of the path loss log-normal shadowing model. Then, the mean squared error is used as the metric to distinguish the signalling object. The simulation results show that the proposed method can effectively complete the task. Also, a mobile edge computing- (MEC-) enabled UAV testbed system is designed and implemented in a real environment. The system works accurately when the number of candidate objects is 3.

1. Introduction

Radio signal source identification provides essential support in many applications [1-3]. However, apart from basic information about the object (such as its shape and outline), the parameters of the signal are usually unknown. Therefore, in a complex electromagnetic environment, it is necessary to first screen out the potential targets and then identify the one that is sending signals. Traditional approaches to object detection and radio direction finding are easily constrained by their own mobility and rely on manual operation and control. Therefore, to improve the efficiency, accuracy, and reliability of identification, as well as to reduce its cost, an integrated and shareable aerial platform is required.

Technological advances in unmanned aerial vehicles (UAVs, also known as drones) have enabled them to carry multiple types of onboard sensors. Cameras are one kind of sensor that has been commonly used. A UAV carrying cameras can provide a clearer view from the air than from the ground while allowing flexible movement. Cameras carried on UAVs can be classified into many types, such as single cameras [4], multiple cameras [5], and spectral cameras [6]. A binocular camera is a single camera with two lenses that is capable of outputting both images and depth information, which has aroused interest in UAV applications such as UAV navigation [7], obstacle avoidance [8], and localization [9]. Furthermore, road detection [10] and photogrammetric measurements [11] can also be addressed with UAV binocular vision.

Meanwhile, the reduced cost and weight of Software-Defined Radio (SDR) equipment make it feasible for UAVs to carry an SDR onboard. The application fields of UAVs carrying SDRs can then be extended to wildlife tracking, search, rescue, etc. For example, in [12], two SDR-based UAV-assisted wildlife tracking methods are presented: one based on four single-stage antennas exploiting the Doppler effect and the other using Yagi antennas to estimate the signal angle of arrival. Ref. [13] presents a UAV system carrying VHF telemetry equipment that includes a hexacopter UAV, an onboard computer, an SDR receiver, a directional antenna, and a control laptop on the ground. In [14], to search for survivors after a disaster, the authors use a UAV carrying an SDR as a GSM base station to receive signals sent by the target's GSM device and locate the target by the received signal strength and the UAV coordinates.

As UAVs are becoming more powerful and smarter [15], automatic UAV-assisted radio source detection has become a potential solution and a hot research topic. An automatic UAV trajectory planning method for ground object localization is proposed in [16]. The solution locates ground objects by received signal strength and automatically finds paths with high localization accuracy and low energy consumption by reinforcement learning. A method for locating illegal base stations via received signal power in the case of an unknown channel model and unknown noise model is proposed in [17]. It takes advantage of directional antennas and controls the UAV by Q-learning algorithms. However, all the above methods have only been validated in numerical simulations. In reality, the real-time performance of the system, due to heavy data transmission stress and computation demands, remains a major challenge to be addressed in field experiments.

In a practical commercial application scenario, a user may not be able to afford such a UAV fleet, nor is it necessary. Therefore, UAVs may come from heterogeneous origins and form a computing paradigm of multiple autonomous agents. In the meantime, the air-ground integrated edge computing system has been put into practice [18]. In such an architecture, how to ensure secure communication and cooperation between multiple parties becomes a key problem. Blockchain [19, 20] has been proven to be very useful in various IoT applications. In [21], the authors describe a method of organizing the communication protocol which allows agents of a multiagent system (MAS) to make decisions about their actions. The paper shows how to implement an autonomous economic system with UAVs and organize a communication system among agents in a peer-to-peer network using the decentralized Ethereum blockchain and smart contracts. "BUS," proposed in [22], is a UAV swarm-assisted data acquisition scheme in which data is collected from IoT devices via a UAV swarm and then stored in the nearest server with the assistance of blockchain. A smart contract is employed to handle the IoT devices and missions in BUS. In another work, the selection of UAVs for the desired quality of network coverage and the development of a distributed and autonomous real-time monitoring framework for the enforcement of service-level agreements (SLAs) are introduced [23]. It builds a novel blockchain architecture that relies on machine learning techniques to monitor and penalize UAVs that violate the SLA. In [24], the authors propose a new type of blockchain to resolve critical message dissemination issues in a vehicular ad hoc network (VANET), which can be used as a reference in our work.

In this paper, partly motivated by [25], we design and implement a mobile edge computing- (MEC-) enabled UAV system in which each UAV is equipped with an onboard SDR and a binocular camera. A signal source identification algorithm that fuses the visual depth information and the received power strength is proposed. The organization of UAV tasks is enabled by blockchain. Our method can work in situations where the objects' transmit power and channel parameters are unknown.

The rest of this paper is organized as follows. Section 2 briefly introduces the overall architecture of the system. Blockchain preliminaries are presented in Section 3. In Section 4, the system model, the four major parts, the formulation of the problem, and the design of the algorithm are described in detail. Section 5 gives the experimental results, including the results of the semiphysical simulation and the real-world experiment. Finally, Section 6 concludes the paper.

2. System Architecture

The proposed scenario consists of a target area, a hidden signal source, various UAVs, some users, a ground access point, edge computing servers, network services, and a blockchain, as shown in Figure 1. A user places an order for signal source identification. With the help of the servers, a smart contract is generated with the order data (the purpose of the order, attribute data, participators, etc.) and then transferred to the blockchain. Any unoccupied UAV can accept the contract, which contains information about the task.

Then, the UAV makes a scheduled flight and informs the user about the result of the task. During the flight, the data collected by the UAV will be sent to the edge servers for analysis. After returning to the base, the UAV notifies the servers that the order is completed. This ensures the security and reliability of the proposed system.

3. Blockchain Preliminaries

The blockchain is essentially a distributed public ledger, where each transaction is recorded in a block. Each block is identified by its hash. Each block not only contains the content of the transactions and a timestamp but also references the hash of its previous block. These blocks are linked by the reference hash and superimposed into a "chain" in chronological order to create a blockchain. A blockchain network is a peer-to-peer network composed of a group of nodes, and each node operates on the same blockchain through its own copy. The blockchain is thus a distributed data structure that is copied and shared among network members. We assume that each user conducts transactions on the network through miners (i.e., nodes). The literature [26] summarizes the operations of nodes in a blockchain network as the following steps:
(1) The user interacts with the blockchain through a pair of private/public keys. When initiating a transaction, the transaction initiator signs the transaction with its private key, and transactions are addressable on the network through the initiator's public key. When a new transaction is generated, it is broadcast to the other participating nodes in the blockchain network.
(2) After the transaction is broadcast to the entire network, within an agreed time interval, each node collects several unverified transactions and hashes them into a time-stamped block. Each block can contain hundreds or thousands of transactions.
(3) Each node performs a Proof-of-Work (PoW) calculation, equivalent to solving a mathematical problem, to determine who can verify the transactions; letting the node that calculates fastest do the job is the way consensus is achieved. The node with the fastest PoW calculation propagates its own block to the other nodes. This calculation process is the "mining" process that derives the valid hash of the new block (a minimal sketch of this hashing loop is given after the list).
(4) The node that obtains the verification right broadcasts the block to all nodes, and the other nodes confirm whether the transactions contained in this block are valid. If valid, they add the block to the blockchain and apply the transactions it contains to update the world view of all nodes; otherwise, the block is discarded. Once the block is accepted by the blockchain, the transactions contained in the block are part of the blockchain and cannot be changed in any way.
(5) Once all nodes accept the block, the blocks whose PoW has not yet been completed are invalidated, and each node reassembles a new block to continue the next round of PoW calculation.
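To make the PoW step in item (3) concrete, the following minimal Python sketch (an illustrative toy, not the consensus code of any particular blockchain platform) searches for a nonce whose SHA-256 block hash starts with a required number of leading zeros:

import hashlib
import json
import time

def mine_block(transactions, prev_hash, difficulty=4):
    # Search for a nonce whose block hash has `difficulty` leading zeros (toy PoW).
    nonce = 0
    while True:
        block = {
            "timestamp": time.time(),
            "transactions": transactions,
            "prev_hash": prev_hash,
            "nonce": nonce,
        }
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return block, digest  # a valid block has been "mined"
        nonce += 1

# Example: mine one block that references the hash of the previous block.
block, block_hash = mine_block(["tx1: initiator -> miner: task offer"], prev_hash="0" * 64)

The next block would store block_hash in its prev_hash field, which is what chains the blocks together.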

In essence, a smart contract is a piece of conditional code that runs on the blockchain. When the two parties of a smart contract generate an asset transaction on the blockchain, this code is triggered to automatically complete the specific transaction process. Smart contracts are special accounts on the blockchain that contain an address, a balance, a status, and code; the address is a unique identifier for the account, just like a regular user account. A smart contract works as follows (a minimal sketch follows the list):
(1) Construction of smart contracts: smart contracts are jointly formulated by multiple users on the blockchain and can govern any transaction behavior between any users. A contract electronically stipulates the rights and obligations of both parties to the transaction. The code contains the conditions that trigger automatic execution of the contract, and all possible outcomes of the contract should be described in it.
(2) Storage of smart contracts: once coding is completed, the smart contract is uploaded to the blockchain network; that is, all nodes on the entire network receive the contract.
(3) Execution of smart contracts: the smart contract periodically checks whether related events and trigger conditions exist and pushes the events that meet the conditions to the queue to be verified. The verification nodes on the blockchain first perform signature verification on the event. After most nodes reach a consensus on the event, the smart contract is executed and the users are notified. The execution result of a smart contract must be deterministic; that is, the same input always produces the same output. Because all interactions with the contract are performed through signed messages on the blockchain, all network participants can obtain a cryptographically verifiable trace of the contract's operations.
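As an illustration of the trigger-and-execute behavior described above, the toy Python class below (a hypothetical stand-in for contract bytecode running on a real chain such as Ethereum) settles a payment deterministically once a validation event arrives:

class PaymentOnDeliveryContract:
    # Toy contract: pays the participant once the initiator validates the returned result.
    def __init__(self, initiator, participant, reward):
        self.initiator = initiator
        self.participant = participant
        self.reward = reward
        self.state = "DEPLOYED"

    def on_event(self, event):
        # Deterministic: the same event sequence always yields the same state and output.
        if self.state == "DEPLOYED" and event == "RESULT_VALIDATED":
            self.state = "SETTLED"
            return {"transfer": (self.initiator, self.participant, self.reward)}
        return None

contract = PaymentOnDeliveryContract("user_A", "uav_7", reward=10)
print(contract.on_event("RESULT_VALIDATED"))  # triggers the payment clause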

4. System Model

The architecture of the proposed blockchain-based signal source identification system is shown in Figure 2. There are three roles: miners, initiators, and participants. In the system, participants have backbone ground edge units that support UAV manipulation, data processing, negotiation with miners, and data transmission to initiators.

Miner: the miner is a trusted and authenticated node that has multiple roles. First, it acts as a broker between initiators and participants: it accepts requests from initiators and matches them with participants. Second, it is a maintainer of the stable operation of the blockchain system, packaging all kinds of data and smart contracts onto the blockchain. In a real scenario, there could be several miners, such as servers of different public third-party platforms.

Initiator: the initiator refers to the users in Section 2. The object detection or signal source identification task is launched by the initiator. The initiator sends this task to the miner and waits for the assignment and the draft smart contract. The smart contract and the data related to the transaction are handed to the miner to be written onto the blockchain. Detection data is returned by the participants directly. Finally, the initiator makes the payment through the blockchain using digital currency.

Participant: the participant refers to the UAV. It is controlled by a ground edge unit. The participant can evaluate the recruitment from the miner and accept the task. It reads the parameters related to the object identification task and the smart contract from the blockchain. Then, it flies to the target area and sends the computation results to the initiator.

After matching and agreement on the smart contract, the object detection and signal source identification task is technically carried out in three major steps, i.e., visual-based object detection and tracking, binocular depth estimation, and wireless signal-aerial image fusion-based signal source identification. The flow chart of the method is shown in Figure 3.

4.1. Blockchain and Smart Contract

The design goals of introducing blockchain and smart contract lie in two aspects:

4.1.1. Reliable Collaboration

We need to ensure the authenticity and reliability of initiators and participants. To this end, the blockchain guarantees the synchronization and consistency of the quality of service (QoS) of each UAV and that the payment is correctly made by the user.

4.1.2. Security and Privacy

We assume that the initiator and participants are all semihonest nodes who obey the agreement and honestly execute the tasks. However, they may want to probe into others' data, either individually or collusively. On the other hand, neither side wants to expose its identity. Therefore, the proposed blockchain-based architecture combines secure schemes such as asymmetric key cryptography, ring signatures, and a consensus mechanism. All operations are performed in a privacy-preserving manner.

According to Figure 2, the whole process consists of eight steps.

Step 1. The initiator sends its task requirement and remuneration offer to the miner, encrypted with its private key and attached with a ring signature.

Step 2. The miner decrypts the requirement and recruits a participant with an optimal matching algorithm according to predefined conditions. The offer is sent encrypted with the miner's private key.

Step 3. The participant decrypts the offer and chooses to accept or decline it in its own interest. If accepted, its parameters and public key are encrypted with the miner's public key and sent back to the miner.

Step 4. The miner then answers the initiator with the information of the participant in an encrypted manner.

Step 5. Then, a smart contract between the initiator and the participant is created upon both parties' confirmation.

Step 6. The smart contract is deployed in the blockchain framework. The miner also builds a secure channel and provides a pair of keys for data exchange of both parties.

Step 7. After receiving the parameters of the task from the initiator (a user) via the secure channel, the participant (a UAV) flies to the target area and performs the identification task.

Step 8. When the task is completed, the result will be returned to and validated by the initiator via the secure channel. Finally, the payment is made using digital currency according to the smart contract.
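The following sketch illustrates the encrypt-and-sign pattern used in Steps 1-2 with the Python cryptography package. It is a simplified stand-in for our scheme: a plain RSA signature replaces the ring signature, and the message contents are hypothetical.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

miner_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
initiator_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Step 1 analogue: the initiator signs the request and encrypts it for the miner.
request = b"task: identify signal source in area A; reward: 10 tokens"
signature = initiator_key.sign(request, pss, hashes.SHA256())
ciphertext = miner_key.public_key().encrypt(request, oaep)

# Step 2 analogue: the miner decrypts the request and checks the signature.
recovered = miner_key.decrypt(ciphertext, oaep)
initiator_key.public_key().verify(signature, recovered, pss, hashes.SHA256())  # raises if invalid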

4.2. Object Detection and Object Tracking

The visual target detection and tracking part mainly completes the subtask of tracking and identifying specific types of targets within the UAV field of view. Target detection is one of the key technologies to improve the perception capability of UAVs and is of great significance. Compared with traditional methods based on handcrafted features, deep learning methods based on convolutional neural networks have powerful feature learning and expression capabilities and have become the mainstream algorithms for target detection tasks [27].

Aerial photography generally has the following characteristics, because its imaging perspective differs from that of natural scene images:
(i) Complex background
(ii) Small targets
(iii) Large field of view
(iv) Rotation

Inspired by [27], the visual target detection part adopts the YOLO-based target detection framework [28]. On this basis, the algorithm is optimized and adjusted according to the characteristics of UAV data collection and the difficulties of recognition, using the drone vehicle dataset [29] and the DOTA dataset [30], as shown in Figure 4. For the target tracking part, the Deep SORT algorithm [31] is used to predict the movement of each object, compute the similarity between object features in consecutive frames, and thus achieve continuous tracking of the target.

We perform data augmentation on both image datasets. Data augmentation is a technique that artificially extends the training dataset by allowing limited data to produce more equivalent data, and it is an effective means of overcoming the shortage of training data. We mainly use random rotation, color transformation, blurring, noise injection, and hybrid image processing to enhance the diversity of the input data. The YOLO pretraining network is then trained with the augmented data to fine-tune the weights of the convolutional layers and transfer the object recognition task from the ground view to the UAV view.
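A minimal augmentation pipeline in the spirit of the transforms listed above could be written with torchvision as follows; the specific magnitudes (rotation range, jitter strength, noise level) are illustrative assumptions, and batch-level hybrid mixing (e.g., mixup or mosaic) is omitted:

import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                      # random rotation
    transforms.ColorJitter(brightness=0.3, contrast=0.3,
                           saturation=0.3, hue=0.05),           # color transformation
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),   # blurring
    transforms.ToTensor(),
    transforms.Lambda(lambda x: torch.clamp(x + 0.02 * torch.randn_like(x), 0.0, 1.0)),  # noise injection
])

# Usage: augmented = augment(pil_image), applied on the fly while training the detection network.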

In the overall system, the signal source identification algorithm takes the visual detection result and the corresponding object ID as input; thus, we add a tracking algorithm on top of YOLO's object recognition to ensure that each detected object keeps a persistent ID. As an algorithm commonly used in Multiobject Tracking (MOT), Deep SORT is a detection-based tracking method with good performance and high industrial interest. The main process of the MOT algorithm is as follows (a sketch of this loop is given after the list):
(1) Given the original video frame, run a target detector such as YOLO and obtain the target detection boxes.
(2) Take the targets of interest out of all detection boxes and perform feature extraction (including appearance features and motion features).
(3) Perform a similarity calculation to compute the matching degree between targets in consecutive frames (the distance between features belonging to the same target is relatively small, while the distance between different targets is large).
(4) Associate the data and assign an ID to each tracked object.
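The following Python sketch shows the shape of this detection-then-tracking loop; detector.detect() and tracker.update() are hypothetical interfaces standing in for the YOLO detector and the Deep SORT tracker used here:

import cv2

def run_mot(video_path, detector, tracker):
    # detector.detect(frame) -> [(x1, y1, x2, y2, score, cls), ...]    (step 1)
    # tracker.update(dets, frame) -> [(track_id, x1, y1, x2, y2), ...] (steps 2-4)
    cap = cv2.VideoCapture(video_path)
    results, frame_idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        detections = detector.detect(frame)
        tracks = tracker.update(detections, frame)
        for track_id, x1, y1, x2, y2 in tracks:
            results.append((frame_idx, track_id, (x1, y1, x2, y2)))
        frame_idx += 1
    cap.release()
    return results  # per-frame boxes with persistent object IDs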

4.3. Binocular Depth Estimate

The binocular depth estimation part completes the subtask of estimating the depth of the identified object, i.e., the distance from the UAV equipped with a binocular camera to the identified target object.

In this work, we choose the SGBM algorithm to estimate the depth between objects and the camera [32]. First, the binocular disparity image $D$, which measures the horizontal disparity of each surface point on the target object between the video frames taken by the left and right cameras, is estimated by energy optimization [33] as shown in

$D^{*} = \arg\min_{D} E(D).$  (1)

Then, the obtained disparity map is converted into a depth map based on the relationship between disparity and depth. Finally, the distance from the target to the camera is extracted from the depth map. More specifically, the disparity value of each pixel is calculated with respect to the right-eye view using the left-eye view as the reference. We construct an energy function $E(D)$ and estimate the optimal disparity image by minimizing the energy value using the following equation:

$E(D) = \sum_{p}\Big(C(p, D_p) + \sum_{q \in N_p} P_1\, T\big[\,|D_p - D_q| = 1\,\big] + \sum_{q \in N_p} P_2\, T\big[\,|D_p - D_q| > 1\,\big]\Big).$  (2)

The first term is the sum of the matching costs over all pixels in the disparity image corresponding to the captured left video frame, where $C(p, D_p)$ is the pixel-wise matching cost of pixel $p$ with disparity $D_p$ corresponding to the minimum cost for pixel $p$. In the current implementation, the matching cost is calculated based on Mutual Information (MI) [34]. The second term adds a constant penalty $P_1$ for all pixels $q$ in the neighborhood $N_p$ of $p$ whose disparity differs from that of $p$ by exactly 1, while the third term adds a larger constant penalty $P_2$ if the disparity difference between $p$ and $q$ is larger than 1. The operator $T[\cdot]$ equals 1 if the event in brackets occurs; otherwise, it equals 0. The constant $P_2$ is chosen larger than $P_1$.

Based on the mapping between disparity and depth, the depth image corresponding to the left-view video frame can be evaluated. The depth $Z$ of a surface point on the target object with disparity value $D_p$ is computed as follows:

$Z = \dfrac{B f}{D_p},$  (3)

where $B$ is the baseline of the binocular camera and $f$ is the camera focal length.
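As a concrete example, OpenCV's SGBM implementation can produce the disparity map and the corresponding depth values; note that OpenCV uses a different matching cost than MI, and the parameter values below are illustrative assumptions:

import cv2
import numpy as np

def depth_map_from_stereo(left_gray, right_gray, focal_px, baseline_m):
    sgbm = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,   # must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,        # penalty for disparity changes of 1 (cf. P1 above)
        P2=32 * 5 * 5,       # larger penalty for changes > 1 (cf. P2 above)
    )
    disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0
    depth = np.full_like(disparity, np.nan)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / D
    return depth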

To obtain the distance between the target object and the camera, the bounding box of the object in the video frame needs to be computed based on the object detection and tracking algorithm. The pixels within the bounding box are sampled at an interval of 2 to 5 pixels, and the depth value is indexed from the corresponding depth map. The average depth value of the sampled pixels is then used to represent the actual distance from the object to the camera.
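The bounding-box averaging step can be sketched as follows, assuming a depth map from the previous step and a detection box (x1, y1, x2, y2) in pixel coordinates:

import numpy as np

def object_distance(depth_map, bbox, stride=3):
    # Sample pixels inside the box at a fixed stride (2-5 px in our setting)
    # and average the valid depth values to approximate the object distance.
    x1, y1, x2, y2 = bbox
    patch = depth_map[y1:y2:stride, x1:x2:stride]
    valid = patch[np.isfinite(patch)]
    return float(np.mean(valid)) if valid.size else None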

4.4. Fusion Object Detection

So far, we have found a number of suspicious objects distributed in the area, which are similar in appearance. Suppose that only one of them is the real object, which is communicating with the outside world in a fixed wireless frequency band, and we need to identify it. Some existing works utilize a wireless network between UAVs and objects to assist localization [35, 36]. However, in both cases, the information of the target is known beforehand, and the wireless connection is only used to transmit data, power, and time of arrival for distance calculation. To this end, the proposed method utilizes a UAV equipped with a binocular camera to obtain the distances between each object and the UAV at measurement points along the trajectory, by applying binocular depth estimation after deep learning-based target detection. At the same time, the received power is obtained using the UAV onboard SDR. As shown in Figure 5, the UAV moves along the path, collecting both types of data, until the desired amount of data has been collected.

Let $\mathcal{S}$ denote the set of objects that the UAV detects in the flight area. Arbitrarily select an object $s \in \mathcal{S}$ and hypothesize that it is the source target sending signals. Then, let the received power measured by the UAV at the $k$-th measurement position be $p_{s,k}$ (assume that only $s$ is transmitting and note that $s$ is anonymous to the UAV), and let the distance to $s$ estimated by binocular depth estimation at the same position be $d_{s,k}$. The two types of data collected during the flight have a one-to-one correspondence, such that the received power attributed to object $s$ measured at the $K$ positions constitutes a vector $\mathbf{p}_s$ and the binocular-depth-estimated distance of object $s$ measured at the $K$ positions constitutes a vector $\mathbf{d}_s$, as shown in

$\mathbf{p}_s = [p_{s,1}, p_{s,2}, \ldots, p_{s,K}]^{T},$  (4)
$\mathbf{d}_s = [d_{s,1}, d_{s,2}, \ldots, d_{s,K}]^{T}.$  (5)

Consider that the received wireless signal power satisfies the path loss log-normal shadowing model [37]:

$P_r(d) = P_0 - 10\, n \log_{10}\!\left(\dfrac{d}{d_0}\right) + X_\sigma,$  (6)

where $P_r(d)$ is the received power in dB at distance $d$ from the object, $P_0$ is the received power in dB at the reference distance $d_0$ from the object, $n$ is the path loss exponent, and $X_\sigma$ is the environment-dependent shadow fading coefficient, which obeys a Gaussian distribution with zero mean and variance $\sigma^2$.

To determine whether object $s$ is sending signals, the distance vector $\mathbf{d}_s$ and power vector $\mathbf{p}_s$ of object $s$ can be substituted into the path loss log-normal shadowing model to check whether they match the model well. Since the parameters $P_0$ and $n$ in the log-normal model are unknown, they need to be estimated first (under the hypothesis that $s$ is the target object). Using maximum likelihood estimation, the procedure for evaluating the parameters of the path loss log-normal shadowing model for object $s$ is as follows.

Let the reference distance $d_0$ in the path loss log-normal shadowing model be equal to 1 m, implying that $P_0$ is the received power at a reference distance of 1 m. Since $p_{s,k}$ follows a Gaussian distribution with mean $P_0 - 10\, n \log_{10} d_{s,k}$ and variance $\sigma^2$, we can define the following likelihood function:

$L(P_0, n, \sigma^2) = \prod_{k=1}^{K} \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\dfrac{\left(p_{s,k} - P_0 + 10\, n \log_{10} d_{s,k}\right)^2}{2\sigma^2}\right).$  (7)

Taking the logarithm of the likelihood function, we obtain

$\ln L(P_0, n, \sigma^2) = -\dfrac{K}{2}\ln\left(2\pi\sigma^2\right) - \dfrac{1}{2\sigma^2}\sum_{k=1}^{K}\left(p_{s,k} - P_0 + 10\, n \log_{10} d_{s,k}\right)^2.$  (8)

Then, to obtain the maximum likelihood estimates of the parameters, we differentiate equation (8) with respect to $P_0$, $n$, and $\sigma^2$ and find the values $\hat{P}_0$, $\hat{n}$, and $\hat{\sigma}^2$ at which the partial derivatives are zero, i.e.,

$\dfrac{\partial \ln L}{\partial P_0} = 0, \qquad \dfrac{\partial \ln L}{\partial n} = 0, \qquad \dfrac{\partial \ln L}{\partial \sigma^2} = 0.$  (9)

After calculation, we can obtain the following estimates for $P_0$, $n$, and $\sigma^2$:

$\hat{n} = -\dfrac{\sum_{k=1}^{K}(g_k - \bar{g})(p_{s,k} - \bar{p})}{\sum_{k=1}^{K}(g_k - \bar{g})^2}, \qquad \hat{P}_0 = \bar{p} + \hat{n}\,\bar{g}, \qquad \hat{\sigma}^2 = \dfrac{1}{K}\sum_{k=1}^{K}\left(p_{s,k} - \hat{P}_0 + 10\,\hat{n}\log_{10} d_{s,k}\right)^2,$  (10)

where

$g_k = 10\log_{10} d_{s,k}, \qquad \bar{g} = \dfrac{1}{K}\sum_{k=1}^{K} g_k, \qquad \bar{p} = \dfrac{1}{K}\sum_{k=1}^{K} p_{s,k}.$  (11)
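The closed-form estimates above reduce to a linear fit of the measured power against $10\log_{10} d$. A minimal NumPy sketch, assuming the distance vector d and power vector p of one candidate object are given:

import numpy as np

def estimate_path_loss(d, p):
    # ML estimates of (P0, n, sigma^2) for p = P0 - 10 * n * log10(d) + X_sigma, with d0 = 1 m.
    d, p = np.asarray(d, dtype=float), np.asarray(p, dtype=float)
    g = 10.0 * np.log10(d)                                   # regressor 10*log10(d/d0)
    n_hat = -np.sum((g - g.mean()) * (p - p.mean())) / np.sum((g - g.mean()) ** 2)
    P0_hat = p.mean() + n_hat * g.mean()
    sigma2_hat = np.mean((p - (P0_hat - n_hat * g)) ** 2)    # 1/K times the residual sum of squares
    return P0_hat, n_hat, sigma2_hat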

Using the parameters $\hat{P}_0$ and $\hat{n}$ obtained from the maximum likelihood estimation, we can obtain the fitted value of the received power according to equation (6) as

$\hat{p}_{s,k} = \hat{P}_0 - 10\,\hat{n}\log_{10} d_{s,k}, \quad k = 1, \ldots, K.$  (12)

To measure the extent to which object $s$ conforms to the path loss log-normal shadowing model, we choose the mean squared error (MSE) between the fitted received power and the measured received power as the metric. The MSE of object $s$ is then defined as

$\mathrm{MSE}_s = \dfrac{1}{K}\sum_{k=1}^{K}\left(p_{s,k} - \hat{p}_{s,k}\right)^2.$  (13)

After the UAV has detected all objects in the flight area, it compares the MSEs of all objects and identifies the object with the smallest MSE as the source target, i.e.,

$s^{*} = \arg\min_{s \in \mathcal{S}} \mathrm{MSE}_s.$  (14)
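Putting the fitting and the MSE comparison together, the selection rule can be sketched as follows (a NumPy toy that fits each candidate with a least-squares line, which coincides with the ML estimates above):

import numpy as np

def identify_source(distance_vectors, power_vectors):
    # Inputs: dicts mapping object IDs to equal-length distance/power sequences.
    mse = {}
    for s, d in distance_vectors.items():
        d = np.asarray(d, dtype=float)
        p = np.asarray(power_vectors[s], dtype=float)
        g = 10.0 * np.log10(d)
        slope, intercept = np.polyfit(g, p, deg=1)   # fit p = P0 - n * g
        p_fit = intercept + slope * g
        mse[s] = float(np.mean((p - p_fit) ** 2))    # MSE of object s
    return min(mse, key=mse.get), mse                # object with the smallest MSE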

In summary, the fusion-based signal source identification algorithm is shown in Algorithm 1.

Input: object set S = Ø; initialize the corresponding distance data vectors ds = Ø and power data vectors ps = Ø
Output: signal source target
1. while UAV keeps flying do
2.   if a new object s' is detected then
3.     S = S ∪ {s'}
4.   end if
5.   obtain the current power p from the SDR measurement
6.   for s in S do
7.     obtain the current distance d of object s from the binocular depth estimate
8.     if d and p are valid then
9.       append d and p to the end of ds and ps
10.    end if
11.  end for
12.  if the collected data reaches a certain amount then
13.    target = None
14.    for s in S do
15.      calculate the estimated P0 and n of object s
16.      calculate MSE_s
17.      update target by comparing MSE_s with the current minimum
18.    end for
19.  end if
20. end while
4.5. Hardware Implementation

We also build a testbed to evaluate the system performance. The hardware component list is described in Table 1.

The experimental system consists of two major parts, the air system and the ground system, as seen in Figure 6. The air system contains various UAVs (in our case, hexacopter UAVs carrying a Raspberry Pi 4 Model B, which links the binocular camera and the SDR by USB cables and communicates with the ground system through Wi-Fi). The ground system includes a laptop with a discrete GPU, on which the costly computations are conducted, and a Wi-Fi AP that connects all devices through wireless communications. The laptop is also responsible for simulating the miner in the blockchain network. We use smartphones to play the role of the initiator, and all tasks are launched from smartphones. The UAV system is shown in Figure 7. The Raspberry Pi 4 and the SDR are mounted on the UAV, and the camera is mounted at an angle toward the ground.

5. Result and Analysis

We used semiphysical simulation to verify the effectiveness of the fusion algorithm. First, we used an SDR on the ground to collect power data at different positions, and then we calculated the path loss coefficient of the actual channel and the received power at a reference distance of 1 m by linear regression. As shown in Figure 8, the object was a USRP B210 device from Ettus Research, and the transmitting flow graphs were written in GNU Radio. The waveform of the signal was a periodic sine wave, the transmitter frequency was set to 907 MHz-927 MHz, the power was set to 1 W, and the transmitting gain of the antenna was 50 dB. The object was placed 1.7 m above the ground to approximate signal propagation in the air as closely as possible. We measured the data at 1 m intervals on the ground, with the distance from the source ranging from 1 m to 30 m. The experiment was repeated 10 times, and a total of 300 received power values were obtained. Figure 9 shows the measured power versus distance. After calculation, we obtained the estimates of $P_0$, $n$, and $\sigma^2$ for the measured channel.
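The regression described above is equivalent to the following NumPy sketch, where dist_m and power_db stand for the measured (distance, power) pairs:

import numpy as np

def calibrate_channel(dist_m, power_db):
    # Least-squares fit of power_db = P0 - 10 * n * log10(dist_m).
    slope, intercept = np.polyfit(10.0 * np.log10(dist_m), power_db, deg=1)
    n_hat, P0_hat = -slope, intercept
    sigma2_hat = np.var(power_db - (P0_hat - 10.0 * n_hat * np.log10(dist_m)))
    return P0_hat, n_hat, sigma2_hat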

5.1. Security Analysis

In our system, all participants and initiators are assumed to be semihonest nodes. Based on this assumption, we are going to analyze the security objectives in this section.

5.1.1. Anonymity between Initiators and Participants

To hide the identities of both the initiator and the participants, we use the ring signature proposed by Rivest et al. [38]. The ring signature can hide the real signer, and if an asymmetric encryption function is used in the signature, its security can be further enhanced. The ring signature in our system is based on an asymmetric encryption function.

5.1.2. Data Security

All data segments are encrypted with the receiver’s public key; without the receiver’s private key, the attacker cannot crack the cryptosystem and obtain the encrypted data segments. At the same time, the key exchange and data communication are through the secure channel created by the miner, which enhances the security.

5.2. Performance of Semiphysical Simulation

We first placed the objects randomly in an area of fixed size, generated the coordinates of the objects, and selected one object as the source target. After that, we generated a sequence of coordinates at uniform intervals of 1 m; this sequence is the coordinate sequence of the simulated UAV flight path. Then, according to the measured channel parameters, the distance to the source was substituted into the path loss log-normal shadowing model to generate the corresponding measured power value. Next, for all objects, the distances were calculated from the coordinates of the UAV flight path and the object coordinates. Noting that the UAV flight passed through $K$ points, we set up for each object a distance vector of size $K$, as well as a power vector. Figure 10 shows the identification accuracy of the proposed fusion method under different numbers of potential objects. When the number of objects is 2, the accuracy decreases slightly with the increase in experimental rounds and finally reaches a stable value of about 70%. When the number of objects is 3, the accuracy also decreases slightly with the increase in experimental rounds and then increases gradually until it stabilizes at about 53%. We attribute this small fluctuation to the influence of random factors: as mentioned above, the objects were randomly placed in the area and the signal source was randomly selected, and Gaussian noise was also added when using the log-normal shadowing model to generate the received power at different distances, which affects the discrimination of objects and hence the accuracy. The reason for the decrease in accuracy with the number of objects is that, within the limited area, increasing the number of objects also increases the possibility that they have similar distances to the UAV, and thus the trends of power versus distance for different objects tend to be similar.

5.3. System Sensitivity Analysis

Figure 11 shows the sensitivity of the fusion method with respect to the shadow fading variance $\sigma^2$ of the log-normal model and the depth (distance) error obtained by the binocular vision. $\sigma^2$ is the variance of the noise mentioned above and indicates, to a certain extent, the stability of the channel or the received signal-to-noise ratio. To explore the influence of the variance on the accuracy of the method, we selected three synthetic values and one measured value for the simulation. Here, the number of objects was set to 3, the distance between objects was 5 m, the number of experiments was 2000, and the number of distance-received power pairs collected in each experiment was 30. The horizontal coordinate is the variance of the distance error of each measurement in square meters. The four curves indicate the detection accuracy of the fusion algorithm when the variance of the shadow fading is set to 5, 10, 15, and 23.48 (the measured value), respectively, while the distance error variance increases from 0 to 4.0.

When the variance is small, in other words, when the signal-to-noise ratio is high, the discrimination accuracy of the method is high. When $\sigma^2$ is equal to 5, the channel is close to ideal and the accuracy is close to 1.0. With the gradual increase in variance, the accuracy begins to decline, and when $\sigma^2$ exceeds 15, the accuracy declines significantly; we preliminarily infer that 15 is a critical point for the method. Also, as the distance error increases, the accuracy decreases gradually. A simple calculation shows that when the distance error variance is 4, the estimated distance can differ from the real distance by 2 m in extreme cases; with an object spacing of 5 m, such an error greatly reduces the accuracy. In practice, the distance error of binocular vision estimation is around 1 m, so the accuracy of this method is high enough for real-world applications.

5.4. Physical Experiment

We also launched the UAV into the air to verify our system. The experiment was held on a lawn, and the objects were simply 3 umbrellas, each placed 3 m away from the others. The whole system worked well when the 3 umbrellas were within the camera's field of view. By controlling the flight path of the UAV, we collected data. Despite the limitations of the setup, the accuracy was very close to the simulation result. Figure 12 shows the setup of the experiment and the real-time output of the system.

6. Conclusion

In this paper, we proposed a signal source identification algorithm that distinguishes the target using a blockchain- and UAV-enabled system. We used binocular cameras to detect candidate objects and the corresponding distances. Then, combining received signal strength measurements with mobile edge computing, we used fusion-based identification to determine the real target object. Blockchain and smart contract technologies were adopted to organize UAVs and users, aiming to provide reliability and security. The discrimination accuracy reached 70% for 2 objects and 53% for 3 objects in the semiphysical simulation. The real-world experiment showed the feasibility of the system, and its performance was close to that in the semiphysical simulation.

Regarding future directions for this work, first, satellite positioning (such as GPS and BeiDou) can be utilized to give precise location information of the signal source after identification. Second, only one signal source is considered in this paper; the method will be extended to support multiple signal sources in future work.

Data Availability

Data is available on request; please contact the corresponding author Huaming Lin ([email protected]).

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Authors’ Contributions

Jian Xiao, Peng Liu, and Huijuan Lu contributed to the implementation of the wireless signal-aerial image fusion. Huaming Lin and Yi Huang contributed to the system architecture design. Hangxiang Fang, Hangguan Shan, and Haoji Hu contributed to object detection and object tracking. Jiayi Xu contributed to the binocular depth estimation.

Acknowledgments

This work was supported in part by the Open Project Funding of the Key Laboratory of Electromagnetic Wave Information Technology and Metrology of Zhejiang Province (No. 2020KF0001) and the Natural Science Foundation of China (Grant Nos. 62172134 and 62102125).