Special Issue: Wireless Communications using Embedded Microprocessors
Application of Smart Sensor in Underwater Weak Object Detection and Positioning
This paper is aimed at studying underwater object detection and positioning. Objects are detected and positioned through an underwater scene segmentation-based weak object detection algorithm and underwater positioning technology based on a three-dimensional (3D) omnidirectional magnetic induction smart sensor. The proposed weak object detection involves a predesigned U-shaped network- (U-Net-) architectured image segmentation network, which has been improved before application. The key factor of underwater positioning technology based on 3D omnidirectional magnetic induction is the magnetic induction intensity. The results show that the image-enhanced object detection method improves the accuracy of Yellow Croaker, Goldfish, and Mandarin Fish by 3.2%, 1.5%, and 1.6%, respectively. In terms of sensor positioning technology, under a positioning Signal-to-Noise Ratio (SNR) of 15 dB and 20 dB, the curve trends of the actual distance and the positioning distance are consistent, while at an SNR of 10 dB, the two curves deviate greatly. The research conclusions read as follows: an underwater scene segmentation-based weak object detection method is proposed for invalid underwater object samples arising from poor labeling; it can effectively segment the background from underwater objects, remove the negative impact of invalid samples, and improve the precision of weak object detection. The positioning model based on a 3D coil magnetic induction sensor can obtain more accurate positioning coordinates. The effectiveness of 3D omnidirectional magnetic induction coil underwater positioning technology is verified by simulation experiments.
The continuous development of science and technology is producing ever more intelligent sensors that are gaining wider applications. In particular, the combined application of weak object detection, positioning technology, and smart sensors is booming. Meanwhile, the expansion of the global population is demanding more efficient approaches to detecting and exploiting marine resources. Traditionally, marine resource exploration, aquatic fishing, and underwater rescue missions mainly relied on diving technology and professional divers, which is costly, inefficient, and, worst of all, risky. Given this situation, the research and application of underwater weak object detection and positioning technology based on intelligent sensors might have a cross-era significance. In underwater missions, object detection and positioning is the preliminary step, and the relevant technologies can be used to control and maintain subsequent machinery.
To date, the research on weak object detection algorithms and positioning based on smart sensors has made further breakthroughs, but strong object detection algorithms rely on large-scale, high-precision datasets for high performance, which is extremely costly. Thus, weak object detection has become a key research direction for reducing high-precision data labeling costs; the weak detection algorithm extracts effective information from unlabeled data, which is then used to improve network performance. Chen et al. pointed out that a detection network could detect objects accurately through training datasets with fewer labeled images. Wei et al. proposed a small and weak object detection method, which outperformed other methods in real-time performance. The experimental results showed that under the worst weather night test conditions, the expected object could be successfully detected under various interference with an accuracy of 0.1 pixels, and the centroid accuracy of the static test could be better than 0.03 pixels. In recent years, positioning algorithms based on Wireless Sensor Networks (WSNs) have become popular among domestic and international researchers, who have put forward effective solutions for different positioning scenes. For example, some researchers propose the Extended Kalman Filter (EKF) method to optimize distance measurement noise, which, however, is very slow and has a high computational cost. Currently, object detection algorithms still have shortcomings. For example, the underwater image quality is often affected by occlusion and image darkening, among other factors, which makes it impossible to collect data on a large scale and obtain high-precision detections. Thereupon, this study proposes an underwater weak object detection and positioning method to address these shortcomings in the current research.
First, according to the research background and current situation, this paper puts forward the research on underwater weak object detection and positioning; then, an in-depth study is conducted on the underwater object through the Deep Learning (DL) object detection algorithm and the underwater positioning technology for three-dimensional (3D) omnidirectional magnetic induction. This paper innovatively combines object detection and positioning technology to study underwater objects, thereby contributing to improving the accuracy of underwater object detection and positioning.
2. Weak Object Detection and Positioning Technological Model
2.1. DL Object Detection Algorithm
2.1.1. U-Shaped Network- (U-Net-) Based Underwater Scene Segmentation
The U-Net image segmentation network is employed for object detection in the underwater scene, and the traditional U-Net model is improved.
Research and design of the underwater scene segmentation network model: the U-Net is structured with layer-hopping (skip) connections as the intermediate module, interconnecting two symmetrical paths. One path under this structure is the contraction path, and the other is the expansion path. The layer-hopping connection is conducive to fusing high-level and low-level information and can improve segmentation precision. Figure 1 shows the structure and content of U-Net.
The segmentation dataset generated from large numbers of imperfectly labeled detection data affects the precision of the segmentation network, which is also the reason why the traditional U-Net is inaccurate in underwater image segmentation. Here, underwater scene segmentation is studied based on the imperfectly labeled object detection dataset. The proposed segmentation method can well separate the background from underwater objects, which is advantageous over the traditional U-Net network through the improvement of the loss function and the optimization of the network structure, as explained below. (a) Network structure design: here, the "encoder-decoder" network structure is adopted, which has two components: downsampling and upsampling. The precision of segmented region positioning depends on the upsampling part, while the acquisition of image context information depends on the downsampling part. There are seven convolution units in the downsampling part, each composed of Convolution, Leaky Rectified Linear Unit (Leaky ReLU), and Batch Normalization. The upsampling part contains six deconvolution units, each of which consists of Deconvolution, ReLU, and Batch Normalization. The underwater scene segmentation convolution kernel proposed here reduces the parameter space and greatly optimizes the traditional U-Net network. Figure 2 displays the structure of the underwater scene segmentation network.
(b) Loss function optimization: because the traditional loss function cannot well segment the underwater scene area, the loss function is optimized through

$$L = \frac{1}{N}\sum_{i=1}^{N}\left(P_i - G_i\right)^2 \qquad (1)$$

In (1), $P_i$ refers to the $i$-th pixel of the image output by the underwater scene segmentation network, $G_i$ represents the corresponding segmentation value generated by the imperfectly labeled underwater object detection dataset, $i$ denotes the subscript of the pixel in the figure, and $N$ stands for the total number of pixels.
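The optimized loss above can be sketched as a minimal NumPy function, assuming a per-pixel squared-error form; the names `pred` and `target` are illustrative, not from the paper:

```python
import numpy as np

def segmentation_loss(pred, target):
    """Mean per-pixel squared error between the network output `pred`
    and the segmentation value `target` generated from the dataset."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    n = pred.size  # total number of pixels N
    return float(np.sum((pred - target) ** 2) / n)

# Example: a 2x2 prediction against its generated segmentation map
loss = segmentation_loss([[1.0, 0.0], [0.5, 1.0]],
                         [[1.0, 0.0], [0.0, 1.0]])
print(loss)  # 0.0625
```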
The flow of the underwater scene segmentation algorithm is as follows: first, the initial image label information is read, and the segmentation network model is constructed; then, the file is loaded, and the model loss is calculated and updated; finally, it is judged whether any data remain. If so, the process is iterated. Figure 3 illustrates the underwater scene segmentation algorithm.
Underwater scene segmentation: the dataset for the scene segmentation network is generated from the underwater object detection dataset. Figure 4 presents the value generation process of underwater scene segmentation: Figure 4(a) is an input image, Figure 4(b) indicates the data labeled as the underwater object, and Figure 4(c) signifies the segmentation network value. Evidently, the labeling boxes of the underwater objects can generate the segmented dataset of the objects.
(a) Original image
(b) Marked area
(c) Binary segmentation map
Figure 5 fully manifests the results of the underwater scene segmentation network: for each group, the first panel is the input image, the second shows the traditional U-Net segmentation result, and the third demonstrates the segmentation result of the proposed method.
(a) Input image 1
(b) U-Net results
(c) The proposed method results
(d) Input image 2
(e) U-Net results
(f) The proposed method results
(g) Input image 3
(h) U-Net results
(i) The proposed method results
The above underwater scene segmentation using U-shaped network (U-Net) architecture is the basis of weak object detection. Thereupon, this section further elaborates on the weak object detection method for scene segmentation.
2.1.2. Weak Object Detection for Scene Segmentation
The weak object detection adopts the Faster Region-based Convolutional Neural Network (R-CNN) architecture (Figure 6). Two methods are adopted for weak object detection: one is the Mean Fill Method-based underwater weak object detection, and the other is the Candidate Region Optimization Method-based underwater weak object detection. Faster R-CNN is a popular and widely used object detection algorithm. The network structure of Faster R-CNN consists of two parts: one is the Region Proposal Network (RPN), and the other is the classification and positioning network for candidate regions, which can be used to extract the candidate regions of the image and judge each region's category and position.
The RPN in the Faster R-CNN network obtains the feature map of the image through a series of convolution operations and performs a window-sliding operation on it. Anchors are generated around each center point mapped back to the original image. The feature map obtained by the shared convolution layer is input into the network, which outputs the coordinate information of each anchor area and the probability that it contains an object rather than the scene. The RPN in Figure 7 is a fully convolutional network.
Equation (2) is the object loss function of the RPN, through which the RPN completes the training of positioning and classification.

$$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^*) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^*\,L_{reg}(t_i, t_i^*) \qquad (2)$$

In (2), $L_{cls}$ represents the object classification loss, $L_{reg}$ refers to the location regression loss, and $p_i$ means the probability that the $i$-th anchor may be the object. $p_i^*$ denotes the probability that the $i$-th anchor is labeled as the object, $t_i$ stands for the transformation parameters of the positive sample anchor, and $t_i^*$ signifies the transformation parameters of the positive sample's ground truth. $\lambda$ is the loss coefficient, $N_{cls}$ indicates the network batch size, and $N_{reg}$ refers to the number of anchor positions. Equation (3) gives the classification loss function, Equation (4) shows the regression loss function, and Equation (5) demonstrates that $R$ is the smooth $L_1$ function.

$$L_{cls}(p_i, p_i^*) = -\log\left[p_i^* p_i + (1 - p_i^*)(1 - p_i)\right] \qquad (3)$$

$$L_{reg}(t_i, t_i^*) = R(t_i - t_i^*) \qquad (4)$$

$$R(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise} \end{cases} \qquad (5)$$
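The RPN loss terms can be sketched in NumPy as below; this follows the standard Faster R-CNN formulation of Equations (3)-(5), and the function names are illustrative:

```python
import numpy as np

def cls_loss(p, p_star):
    """Binary log loss for anchor objectness (Equation (3)):
    p is the predicted probability, p_star the 0/1 label."""
    return float(-np.log(p_star * p + (1 - p_star) * (1 - p)))

def smooth_l1(x):
    """Smooth-L1 function R of Equation (5)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1, 0.5 * x ** 2, np.abs(x) - 0.5)

def reg_loss(t, t_star):
    """Regression loss of Equation (4), summed over the
    four box transformation parameters."""
    return float(np.sum(smooth_l1(np.asarray(t) - np.asarray(t_star))))

print(cls_loss(1.0, 1))                        # 0.0 (perfect prediction)
print(reg_loss([1.0, 0, 0, 0], [0.5, 0, 0, 0]))  # 0.125
```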
Mean Fill Method: the positive and negative samples of object detection are greatly affected by the labeling information. The positive samples represent the object samples to be detected, and the negative samples stand for the samples with backgrounds. Figure 8 illustrates the labeling diagram of true samples and false samples.
(a) Annotated image
(b) True-negative sample
(c) False-negative sample
This study uses the basic architecture of Faster R-CNN and optimizes the traditional Faster R-CNN. Firstly, the background without missing-region filling is obtained by segmenting the input images based on the Mean Fill Method. Secondly, the high-level and low-level information is fused. Finally, nearest-neighbor interpolation is replaced with bilinear interpolation. Figure 9 reveals the network diagram of the mean-filling weak object detection algorithm.
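A minimal sketch of the mean-filling idea, assuming the background is given as a boolean mask and filled with the mean of the remaining foreground pixels; the interface (`mean_fill`, grayscale input) is illustrative, not the paper's implementation:

```python
import numpy as np

def mean_fill(image, background_mask):
    """Replace background pixels (mask == True) with the mean
    of the remaining foreground pixels."""
    filled = np.asarray(image, dtype=float).copy()
    fg_mean = filled[~background_mask].mean()
    filled[background_mask] = fg_mean
    return filled

# Example: the bright pixel at (1, 1) is background and gets the
# mean of the three foreground pixels, (1 + 2 + 3) / 3 = 2.0.
filled = mean_fill(np.array([[1.0, 2.0], [3.0, 100.0]]),
                   np.array([[False, False], [False, True]]))
print(filled[1, 1])  # 2.0
```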
Candidate Region Optimization Method: this method can accurately detect underwater objects while preserving the original image information to the greatest extent. The underwater weak object detection for the candidate region used here builds on the RPN. Given the imperfect underwater object detection data, this study optimizes the RPN in the traditional Faster R-CNN. Figure 10 presents the network diagram of the weak object detection based on the Candidate Region Optimization Method.
The following is a detailed process. (1) First, the background and scene of the underwater image are segmented; the scene area is labeled as $S$, and the true labeled area of the detection dataset is labeled as $G$; finally, $G$ is subtracted from $S$ to obtain the missed-label area $M$. (2) In RPN training, positive samples are labeled with the same method as the traditional method, while negative samples are labeled differently: anchors falling in the missed-label area are removed from the negative samples, so that more accurate positive and negative samples are obtained.
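Step (1) above can be illustrated with boolean masks, assuming $S$ is the segmented scene area and $G$ the labeled ground-truth area; the toy arrays below are illustrative:

```python
import numpy as np

# S: scene (foreground) area produced by the segmentation network
S = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 0]], dtype=bool)
# G: labeled ground-truth area of the detection dataset
G = np.array([[1, 0, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=bool)
# M = S - G: scene pixels that were never labeled (missed labels).
# Anchors falling inside M are excluded from the negative samples.
M = S & ~G
print(M.sum())  # 2 missed-label pixels
```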
The operation process of the object detection algorithm is as follows: first, the initial data are read to construct the Faster R-CNN object detection model; then, the file is loaded, the training data are read, the feature map is obtained, and the sliding-window operation is used to generate anchors; afterward, the model loss is calculated and updated; finally, it is judged whether any data remain. If so, the operation is iterated. Figure 11 shows the flowchart of the detection model.
The dataset of this experiment: the dataset used here includes four objects, Yellow Croaker, Carp, Goldfish, and Mandarin Fish, totaling 18,779 underwater images, with 16,866 pieces of training data, 1,268 pieces of verification data, and 868 pieces of test data. In Figure 12, (a) represents the underwater image taken by the machine, and (b) is the visualization of the labeled image. The disadvantage of this dataset is that the labeling quality is imperfect, with omissions; the advantage is that the amount of data is relatively large. Figure 12 is a schematic diagram of dataset labeling.
(a) Unlabeled image
(b) Annotated image
The experimental parameter setting: the weak underwater object detection network is trained on the Faster R-CNN architecture. Visual Geometry Group Network (VGG) 16, a widely used classification network, serves as the backbone, and the parameters of the detection network are initialized from pretraining on the ImageNet dataset. The initial learning rate of the network is 0.0002 and is divided by 10 every 50,000 iterations; the momentum value of the network is 0.9, and the weight decay is 0.0005. Here, Mean Average Precision (mAP) is used to evaluate the experimental results. Specifically, precision, recall, and F-measure are feedback indicators of detection performance. Precision and recall are calculated as in

$$Precision = \frac{TP}{TP + FP} \qquad (6)$$

$$Recall = \frac{TP}{TP + FN} \qquad (7)$$

In (6) and (7), TP refers to the number of true positives (objects correctly detected), TN means the number of true negatives, FN stands for the number of false negatives (objects missed), and FP is the number of false positives (background wrongly detected as objects).
Equation (8) expresses the Average Precision (AP) as the integral of precision over recall, and Equation (9) shows the solution of mAP, where $K$ means the number of object categories and mAP is the average of the per-category APs. The larger the mAP is, the better the detection performance is.

$$AP = \int_0^1 P(R)\,dR \qquad (8)$$

$$mAP = \frac{1}{K}\sum_{k=1}^{K} AP_k \qquad (9)$$
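Equations (6)-(9) can be sketched as follows; the trapezoidal integration of AP is an illustrative approximation of the precision-recall integral, not the paper's exact evaluation code:

```python
import numpy as np

def precision_recall(tp, fp, fn):
    """Equations (6) and (7) from detection counts."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(precisions, recalls):
    """Equation (8): integrate precision over recall
    (trapezoidal approximation)."""
    order = np.argsort(recalls)
    p = np.asarray(precisions, dtype=float)[order]
    r = np.asarray(recalls, dtype=float)[order]
    return float(np.sum((r[1:] - r[:-1]) * (p[1:] + p[:-1]) / 2))

def mean_ap(aps):
    """Equation (9): mAP is the mean of per-category APs."""
    return sum(aps) / len(aps)

p, r = precision_recall(tp=8, fp=2, fn=2)
print(p, r)  # 0.8 0.8
```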
Aiming at object detection in the underwater scene, this section adopts the improved U-Net-architectured image segmentation model. In the improved U-Net architecture, layer hopping is used as the intermediate module, and then, two symmetrical paths are connected. The layer hopping connection is conducive to high-level and low-level information fusion and improves the segmentation accuracy.
2.2. Underwater Positioning Technology for 3D Omnidirectional Magnetic Induction
2.2.1. Algorithm Principle of 3D Magnetic Induction Positioning Technology
The framework of inductive positioning technology: the smart sensor inductive positioning technology can generate an orthogonal magnetic field using the signal source and determine the specific spatial position of the object according to the different magnetic induction intensities. In Figure 13, coils $T_X$, $T_Y$, and $T_Z$ indicate the subcoils of the transmitting coil perpendicular to the $x$-, $y$-, and $z$-axes, respectively; coils $R_X$, $R_Y$, and $R_Z$ denote the subcoils of the receiving coil perpendicular to the $x$-, $y$-, and $z$-axes, respectively. $\alpha$ stands for the plane angle between nodes, $B_X$, $B_Y$, and $B_Z$ mean the magnetic induction intensities at the induction node from the three subcoils, and $d$ is the distance between the transmitting and induction nodes. Figure 14 illustrates the underwater positioning diagram based on 3D omnidirectional magnetic induction.
The magnetic induction distribution model of a magnetic dipole: the finite element integral model and the magnetic dipole model can be used to simulate the spatial magnetic field distribution of the coil. The finite element integral model has higher precision, while the magnetic dipole model has a faster calculation speed, which is more in line with the underwater scene research here. In Figure 13, the $x$-axis and $y$-axis determine the plane where the coil lies; this plane is perpendicular to the $z$-axis, and the center of the coil coincides with the origin of the coordinates. $a$ represents the radius of the coil, $z$ denotes the distance from a point to the plane determined by the $x$-axis and $y$-axis, and $r$ is the distance from any point in space to the origin. $B_P$ represents the magnetic induction intensity at point $P$, with $B_x$ and $B_y$ its components on the $x$-axis and $y$-axis; $B_Q$ means the magnetic induction intensity at point $Q$, with $B_z$ its component on the $z$-axis. Figure 15 shows the spatial position layout of the magnetic induction coil, and Figure 13 is a spatial relationship diagram of point $P$ and point $Q$.
The magnetic induction intensity at any point $P(x, y, z)$ relative to the coil lying in the plane determined by the $x$-axis and $y$-axis reads

$$\mathbf{B} = \frac{\mu_0 I a^2}{4 r^3}\left(\frac{3xz}{r^2}\,\mathbf{e}_x + \frac{3yz}{r^2}\,\mathbf{e}_y + \left(\frac{3z^2}{r^2} - 1\right)\mathbf{e}_z\right) \qquad (10)$$

In (10), $\mu_0$ means permeability, $a$ represents the coil radius, and $r$ denotes the distance from point $P$ to the origin. $\mathbf{e}_x$, $\mathbf{e}_y$, and $\mathbf{e}_z$ stand for the unit vectors along the coordinate axes, $x$, $y$, and $z$ indicate the spatial coordinates of point $P$, and $I$ refers to the current. The expression of the magnetic induction intensity of an $N$-turn coil reads

$$\mathbf{B}_N = N\,\mathbf{B} \qquad (11)$$
The magnitude of the magnetic induction intensity at any point in space can be expressed as the vector sum of the components on the $x$-, $y$-, and $z$-axes, as expressed in

$$B = \sqrt{B_x^2 + B_y^2 + B_z^2} \qquad (12)$$
The magnitude of the magnetic induction intensity at point $P$ reads

$$B_P = \frac{\mu_0 N I a^2}{4 r^3}\sqrt{3\cos^2\theta + 1} \qquad (13)$$
The spatial relationship between point $P$ and point $Q$ can be expressed by

$$\cos\theta = \frac{z}{r}, \qquad r = \sqrt{x^2 + y^2 + z^2} \qquad (14)$$

where $\theta$ is the angle between the position vector of the point and the coil axis.
The relationship between the magnetic induction intensity at different points and the distance to the signal source can be expressed as

$$B(r) = \frac{\mu_0 N I a^2}{4}\cdot\frac{\sqrt{3\cos^2\theta + 1}}{r^3} \propto \frac{1}{r^3} \qquad (15)$$
The relationship between the magnetic induction components of any point in space and its space vector can be expressed as

$$B_x = \frac{\mu_0 N I a^2}{4}\cdot\frac{3xz}{r^5}, \qquad B_y = \frac{\mu_0 N I a^2}{4}\cdot\frac{3yz}{r^5} \qquad (16)$$

$$B_z = \frac{\mu_0 N I a^2}{4 r^3}\left(\frac{3z^2}{r^2} - 1\right) \qquad (17)$$
Based on Equations (13), (15), and (17), the magnetic induction intensity at any point in space can be expressed as

$$B = \frac{\mu_0 N I a^2}{4 r^3}\sqrt{\frac{3z^2}{r^2} + 1} \qquad (18)$$
By symmetry, the expression of the magnetic induction intensity of coil $T_X$ at any point in space reads

$$B_X = \frac{\mu_0 N I a^2}{4 r^3}\sqrt{\frac{3x^2}{r^2} + 1} \qquad (19)$$
The relationship between the spatial magnetic induction intensity at any point in the induction coil and the spatial position of the source coil can be expressed by

$$B_X = \frac{k}{r^3}\sqrt{3\cos^2\alpha + 1}, \quad B_Y = \frac{k}{r^3}\sqrt{3\cos^2\beta + 1}, \quad B_Z = \frac{k}{r^3}\sqrt{3\cos^2\gamma + 1} \qquad (20)$$

In (20), $\alpha$, $\beta$, and $\gamma$ mean the included angles between the $x$-, $y$-, and $z$-axes and the line from the coordinate origin to the center of the induction coil. The relationship between them conforms to

$$\cos^2\alpha + \cos^2\beta + \cos^2\gamma = 1 \qquad (21)$$
$$k = \frac{\mu_0 N I a^2}{4} \qquad (22)$$

In Equation (22), $k$ is the coefficient of the subcoil of the 3D source coil.
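A sketch of the range calculation implied by Equations (20)-(22): summing the squared field magnitudes from the three orthogonal subcoils cancels the angle terms (since the squared direction cosines sum to 1), leaving only the distance. The coil parameters follow the simulation settings of Section 2.2.2; the function names and the coefficient form $k = \mu_0 N I a^2/4$ are this sketch's assumptions:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def coil_coefficient(n_turns, current, radius):
    """Subcoil coefficient k, assumed as mu0*N*I*a^2/4."""
    return MU0 * n_turns * current * radius ** 2 / 4

def field_magnitudes(k, x, y, z):
    """|B| at (x, y, z) from each of the three orthogonal source
    subcoils, following the dipole-model expressions above."""
    r = np.sqrt(x * x + y * y + z * z)
    return [k * np.sqrt(3 * (c / r) ** 2 + 1) / r ** 3 for c in (x, y, z)]

def range_from_fields(k, bx, by, bz):
    """cos^2a + cos^2b + cos^2g = 1 implies
    Bx^2 + By^2 + Bz^2 = 6 k^2 / r^6, independent of orientation."""
    return (6 * k ** 2 / (bx ** 2 + by ** 2 + bz ** 2)) ** (1 / 6)

# Coil of radius 10 cm, 200 turns, 100 A; point at distance 3 m.
k = coil_coefficient(200, 100, 0.10)
bx, by, bz = field_magnitudes(k, 1.0, 2.0, 2.0)
print(round(range_from_fields(k, bx, by, bz), 6))  # 3.0
```

Summing the three squared magnitudes is what makes the 3D omnidirectional coil attractive: the range estimate does not depend on the receiver's orientation.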
2.2.2. Positioning Algorithm Based on 3D Magnetic Induction Coil
The algorithm flow is as follows: the magnetic induction intensity of the induction point at any point in space is put into the positioning equation to calculate the coordinates that are then converted into the ultimate coordinates and distance. The specific flowchart of the algorithm is shown in Figure 15.
Here, MATLAB is used to simulate and verify the magnetic field distribution of the magnetic induction coil. Gaussian white noise is added at each spatial point to simulate a real underwater environment, thereby achieving more realistic results. The specific parameters are as follows: the radius of the source coil is set to 10 cm, the number of turns is 200, the current of the energized coil is 100 A, the permeability is taken as the vacuum permeability $\mu_0$, and the resolution is set to 1 cm. After the parameters are set, noise is added, and the simulated magnetic field distribution is drawn using the magnetic dipole model. Then, 20 different sets of coordinate values are selected randomly, the coordinate results are calculated through the 3D magnetic induction positioning algorithm, and the results are compared with the actual coordinates.
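The noise-addition step can be sketched as a standard additive-white-Gaussian-noise construction, where the noise power is set from the signal power and the target SNR in dB; this is an illustrative sketch, not the paper's MATLAB code:

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise so the result has the requested SNR (dB):
    noise_power = signal_power / 10^(SNR/10)."""
    if rng is None:
        rng = np.random.default_rng(0)
    signal = np.asarray(signal, dtype=float)
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), signal.shape)
    return signal + noise

# A 20 dB SNR on a unit-power signal gives noise power ~0.01.
noisy = add_awgn(np.ones(200000), 20.0)
```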
3.1. Analysis of Experimental Results of Object Detection
Figure 16(a) suggests that the trend of the Mean Fill Method and Faster R-CNN curve is very similar, but the mAP of Faster R-CNN is always below that of the Mean Fill Method in the iterative process. This shows that when the number of iterations increases, the Mean Fill Method-based underwater weak object detection can well remove the influence of false-negative samples on model implementation and further improve the precision of object detection.
Figure 16(b) implies that the curve trends of the Mean Fill Method and the Candidate Region Optimization Method are very similar, but the mAP of the Mean Fill Method is always below that of the Candidate Region Optimization Method during the iteration process. This indicates that when the number of iterations increases, the underwater weak object detection method based on the Candidate Region Optimization Method can better remove the influence of false-negative samples on model implementation and further improve the precision of object detection.
Figure 16(c) reveals that the curve trends of the Candidate Region Optimization Method-based underwater weak object detection before and after image enhancement are similar, but the mAP of the image before enhancement is always below that of the image after enhancement in the iterative process. This shows that when the number of iterations increases, the enhanced image can further improve the precision of underwater weak object detection.
After many iterations, the Mean Fill Method-based underwater weak object detection model shows its advantages in terms of detection precision for different objects. In Figure 17, the precision of the four objects under the Mean Fill Method is significantly higher than that of the Faster R-CNN method. Under the Mean Fill Method, the detection precision of Yellow Croaker, Carp, Goldfish, and Mandarin Fish is 65.7%, 69.0%, 57.4%, and 63.1%, respectively. Compared with Faster R-CNN, the Mean Fill Method-based underwater weak object detection algorithm improves the precision of Yellow Croaker, Carp, Goldfish, and Mandarin Fish detection by 6.1%, 7.4%, 3.6%, and 5.7%, respectively.
After many iterations, the Candidate Region Optimization Method-based detection model has shown more advantages in terms of detection precision for different objects. In Figure 18, the precision of the four objects under the Candidate Region Optimization Method is significantly higher than that of the Mean Fill Method. Under the Candidate Region Optimization Method, the detection precision of Yellow Croaker, Carp, Goldfish, and Mandarin Fish is 73.0%, 78.0%, 58.2%, and 69.7%, respectively. Compared with the Mean Fill Method, the underwater weak object detection model based on the Candidate Region Optimization Method improves the detection precision for Yellow Croaker, Carp, Goldfish, and Mandarin Fish by 13.4%, 16.4%, 7.3%, and 12.3%, respectively.
After many iterations, the Candidate Region Optimization Method-based detection model has shown further advantages in detection precision after image enhancement. In Figure 19, the detection precision of the Candidate Region Optimization Method for the four objects after image enhancement is significantly higher than that before image enhancement. After image enhancement, the detection precision of Yellow Croaker, Carp, Goldfish, and Mandarin Fish is 76.2%, 77.9%, 59.7%, and 71.3%, respectively. Compared with that before image enhancement, the precision of the Candidate Region Optimization Method-based underwater weak object detection for Yellow Croaker, Goldfish, and Mandarin Fish is improved by 3.2%, 1.5%, and 1.6%, respectively.
3.2. Simulation Analysis of Random Coordinate Positioning
Further, the distance and related coordinates are calculated based on the measured value of the random spatial position, the 3D magnetic induction intensity of the transmitting coil, and the proposed positioning algorithm. Multiple groups of coordinates are randomly selected and are given a noise with a Signal-to-Noise Ratio (SNR) of 10 dB, 15 dB, and 20 dB, respectively. Under such conditions, the corresponding coordinates are simulated, and the specific results are shown in Figures 20 and 21: Figure 20 is a comparison diagram of the positioning distance and actual distance of the induction coil, and Figure 21 shows a comparison histogram of the actual coordinates and positioning coordinates of the underwater object.
(a) x-axis value when SNR is 10 dB
(b) y-axis value when SNR is 10 dB
(c) z-axis value when SNR is 10 dB
(d) x-axis value when SNR is 15 dB
(e) y-axis value when SNR is 15 dB
(f) z-axis value when SNR is 15 dB
(g) x-axis value when SNR is 20 dB
(h) y-axis value when SNR is 20 dB
(i) z-axis value when SNR is 20 dB
Figure 20 presents that when the SNR is 15 dB or 20 dB, the curve trends of the actual distance and the positioning distance are very similar. However, when the SNR is 10 dB, there is a great deviation between the actual distance and the positioning distance under the proposed positioning algorithm. Figure 21 suggests that at an SNR of 20 dB, the positioning precision is within 20 cm, and at an SNR of 15 dB, it is within 30 cm.
When the SNR is 15 dB, a region of 0-1200 cm is designed with an interval of 50 cm, and 19 groups of coordinates with equal $x$, $y$, and $z$ components are chosen. Then, the specific coordinates are calculated according to the proposed positioning algorithm, and the deviation between the actual coordinates and the positioning result is observed as the distance gradually increases. Figure 22 shows the simulation results.
In Figure 22, when the SNR is 15 dB, the positioning coordinates will slowly deviate from the actual coordinate trajectory with the increase of the distance.
With the continuous development of science and technology, sensors are becoming ever more intelligent and are seeing broader applications. As the world population keeps expanding rapidly, the efficient detection and development of marine resources concern both researchers and all nations. Traditionally, marine resource exploration, aquatic fishing, and underwater rescue mainly relied on diving technology and relevant personnel, which was risky, inefficient, and costly. Object detection and positioning technology is an important link in underwater missions, and its applications can be used to control, plan, and operate subsequent underwater machinery.
Here, the technology for underwater weak object detection and positioning is studied, which is of great significance to underwater work. Specifically, the problem of object detection and positioning in underwater scenes is analyzed through the underwater scene segmentation-based weak object detection algorithm and the smart-sensor positioning technology. The conclusions read as follows: for invalid underwater object samples arising from poor labeling, an underwater scene segmentation-based weak object detection method is proposed; given imperfectly labeled objects, the proposed method can effectively segment the background from underwater objects, remove the negative effects of invalid samples, and improve the precision of weak object detection. The 3D magnetic induction coil sensor-based positioning model can obtain more accurate positioning coordinates, and the effectiveness of the 3D omnidirectional magnetic induction coil-based underwater positioning technology is verified by simulation experiments. The limitation of this study is that the real-time performance of the proposed algorithm is not fully considered, so real-time underwater object detection and positioning is the next research direction. The proposed underwater weak object detection and positioning method has important practical significance: it plays a positive role in the shipbuilding and marine engineering industries, detects and locates underwater objects autonomously, breaks the traditional operation mode, greatly improves work efficiency, and assists safe production.
The segmentation data used to support the findings of this study are included within the article.
Conflicts of Interest
The author declares that there are no conflicts of interest.
Z. Sang, K. Ke, and I. Manas-Zloczower, "Design strategy for porous composites aimed at pressure sensor application," Small, vol. 15, no. 45, article 1903487, 2019.
M. E. Akroush and M. C. Wicks, "Optimal linear filtering for weak target detection based on dyadic contrast function analysis in RFT," IET Radar, Sonar & Navigation, vol. 14, no. 5, pp. 773–781, 2020.
Z. Jiang and Y. Fan, "Singularity intensity function analysis of autoregressive spectrum and its application in weak target detection under sea clutter background," Radio Science, vol. 55, no. 10, pp. 1–8, 2020.
J. Yu, X. Meng, B. Yan, B. Xu, Q. Fan, and Y. Xie, "Global navigation satellite system-based positioning technology for structural health monitoring: a review," Structural Control and Health Monitoring, vol. 27, no. 1, 2020.
J. Xiong, Z. He, R. Lin et al., "Visual positioning technology of picking robots for dynamic litchi clusters with disturbance," Computers and Electronics in Agriculture, vol. 151, pp. 226–237, 2018.
K. Zhang, C. Shen, Q. Gao, L. Zheng, H. Wang, and Z. Li, "Ultra wideband positioning technology for accident ships under adverse sea condition," Journal of Coastal Research, vol. 83, pp. 902–907, 2018.
H. Zhu, S. Liu, L. Deng, Y. Li, and F. Xiao, "Infrared small target detection via low-rank tensor completion with top-hat regularization," IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 2, pp. 1004–1016, 2020.
D. L. Chen, P. Wawrzynski, and Z. H. Lv, "Cyber security in smart cities: a review of deep learning-based applications and case studies," Sustainable Cities and Society, vol. 66, article 102655, 2021.
M. S. Wei, F. Xing, and Z. You, "A real-time detection and positioning method for small and weak targets using a 1D morphology-based approach in 2D images," Light: Science & Applications, vol. 7, no. 5, article 18006, 2018.
C. Pang, S. Liu, and Y. Han, "High-speed target detection algorithm based on sparse Fourier transform," IEEE Access, vol. 6, pp. 37828–37836, 2018.
J. Han, K. Liang, B. Zhou, X. Zhu, J. Zhao, and L. Zhao, "Infrared small target detection utilizing the multiscale relative local contrast measure," IEEE Geoscience and Remote Sensing Letters, vol. 15, no. 4, pp. 612–616, 2018.
M. Z. Alom, C. Yakopcic, M. Hasan, T. M. Taha, and V. K. Asari, "Recurrent residual U-Net for medical image segmentation," Journal of Medical Imaging, vol. 6, no. 1, article 014006, 2019.
N. Ibtehaz and M. S. Rahman, "MultiResUNet: rethinking the U-Net architecture for multimodal biomedical image segmentation," Neural Networks, vol. 121, pp. 74–87, 2020.
X. Liu, Y. Zhang, H. Jing, L. Wang, and S. Zhao, "Ore image segmentation method using U-Net and Res_Unet convolutional networks," RSC Advances, vol. 10, no. 16, pp. 9396–9406, 2020.
S. Ghosh, N. Das, I. Das, and U. Maulik, "Understanding deep learning techniques for image segmentation," ACM Computing Surveys, vol. 52, no. 4, pp. 1–35, 2019.
G. Tong, Y. Li, H. Chen, Q. Zhang, and H. Jiang, "Improved U-NET network for pulmonary nodules segmentation," Optik, vol. 174, pp. 460–469, 2018.
M. Kolařík, R. Burget, V. Uher, K. Říha, and M. Dutta, "Optimized high resolution 3D dense-U-Net network for brain and spine segmentation," Applied Sciences, vol. 9, no. 3, p. 404, 2019.
J. M. Martin-Donas, A. M. Gomez, J. A. Gonzalez, and A. M. Peinado, "A deep learning loss function based on the perceptual evaluation of the speech quality," IEEE Signal Processing Letters, vol. 25, no. 11, pp. 1680–1684, 2018.
Y. Qin, L. Bruzzone, C. Gao, and B. Li, "Infrared small target detection based on facet kernel and random walker," IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 9, pp. 7104–7118, 2019.
R. Meng, S. G. Rice, J. Wang, and X. Sun, “A fusion steganographic algorithm based on faster R-CNN,” Computers, Materials & Continua, vol. 55, no. 1, pp. 1–16, 2018.View at: Google Scholar
W. Wu, Y. Yin, X. Wang, and D. Xu, “Face detection with different scales based on faster R-CNN,” IEEE Transactions on Cybernetics, vol. 49, no. 11, pp. 4017–4028, 2019.View at: Publisher Site | Google Scholar
Z. Zhong, L. Sun, and Q. Huo, “An anchor-free region proposal network for Faster R-CNN-based text detection approaches,” International Journal on Document Analysis and Recognition (IJDAR), vol. 22, no. 3, pp. 315–327, 2019.View at: Publisher Site | Google Scholar
C. Wang, J. Shi, X. Yang et al., “Geospatial object detection via deconvolutional region proposal network,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 8, pp. 3014–3027, 2019.View at: Publisher Site | Google Scholar
Q. Guo, L. Liu, W. Xu, Y. Gong, X. Zhang, and W. Jing, “An improved faster R-CNN for high-speed railway dropper detection,” IEEE Access, vol. 8, pp. 105622–105633, 2020.View at: Publisher Site | Google Scholar
Y. Gao and K. M. Mosalam, “PEER Hub ImageNet: a large-scale multiattribute benchmark data set of structural images,” Journal of Structural Engineering, vol. 146, no. 10, article 04020198, 2020.View at: Publisher Site | Google Scholar
N. Ha, K. Xu, G. Ren, A. Mitchell, and J. Z. Ou, “Machine learning-enabled smart sensor systems,” Advanced Intelligent Systems, vol. 2, no. 9, article 2000063, 2020.View at: Publisher Site | Google Scholar
Y. Li, S. Wang, C. Jin, Y. Zhang, and T. Jiang, “A survey of underwater magnetic induction communications: fundamental issues, recent advances, and challenges,” IEEE Communications Surveys & Tutorials, vol. 21, no. 3, pp. 2466–2487, 2019.View at: Publisher Site | Google Scholar
N. Golestani and M. Moghaddam, “Human activity recognition using magnetic induction-based motion signals and deep recurrent neural networks,” Nature Communications, vol. 11, no. 1, pp. 1–11, 2020.View at: Google Scholar
K. Fukuoka, “Consideration of magnetic particle testing using rotating magnetic field of omnidirectional crack in three-dimensional shape portion and evaluation of flaw detection performance,” Journal of the Japan Society of Applied Electromagnetics and Mechanics, vol. 28, no. 2, pp. 63–68, 2020.View at: Publisher Site | Google Scholar
A. Gloppe, R. Hisatomi, Y. Nakata, Y. Nakamura, and K. Usami, “Resonant magnetic induction tomography of a magnetized sphere,” Physical Review Applied, vol. 12, no. 1, article 014061, 2019.View at: Publisher Site | Google Scholar
Y. Chen, C. Tan, and F. Dong, “Combined planar magnetic induction tomography for local detection of intracranial hemorrhage,” IEEE Transactions on Instrumentation and Measurement, vol. 70, pp. 1–11, 2021.View at: Publisher Site | Google Scholar