Abstract

During the inspection of power pole towers, the shooting viewpoints of the drone are mainly selected by manual positioning, which leads to erroneous viewpoint selection and inaccurate shots of the inspected objects. In addition, neglecting the effect of backlighting from the sun results in poor photo quality that does not meet inspection requirements. Aiming at the selection of shooting viewpoints during the inspection of power poles by multirotor unmanned aerial vehicles (UAVs), this paper proposes an automatic positioning method that determines the shooting viewpoints by considering UAV performance, airborne camera parameters, and the size of the objects to be measured. The method is further optimized by taking sun illumination into account, so that the viewpoint positions are guaranteed and the generated images are clear enough for observers to check the power pole towers. Finally, the automatic calculation of the related viewpoints is implemented in the Java language. Experiments show that the method can accurately obtain the positions of the drone's viewpoints and reduce their number, which significantly improves the efficiency and quality of inspection shooting.

1. Introduction

Power transmission is carried out through the transmission lines on power pole towers, so it is very important to ensure the safety of both the towers and the lines. In areas with complex terrain, which may even threaten the personal safety of the inspectors, the cost of manual inspection is very high. This has created a demand for drones to inspect power poles in place of manual inspections. UAVs (unmanned aerial vehicles) have the advantages of light weight, small size, and flexibility. They can generally be divided into fixed-wing UAVs, multirotor UAVs, and unmanned helicopters. Multirotor UAVs can be operated flexibly and hover in the air, which makes them suitable for the inspection of power towers [1, 2].

With the increasing maturity of UAV technology, it has been widely used in aerial image measurement, regional topographic surveying, power tower inspection, and other fields. The application of machine learning to drones has also become more extensive, covering image recognition, fault diagnosis, information management, intelligent communication, etc. Huang et al. [3] elaborated on the research results of machine learning algorithms in data mining and communication. Fragkos et al. [4] established a management framework for UAV-assisted information systems based on game theory and machine learning. However, in the field of intelligent UAV inspection, there is currently a lack of research on UAV shooting viewpoints.

The high work intensity and low efficiency of conventional manual inspection on the ground mean that it is restricted by many factors, whereas UAV inspection can improve both the quality and the efficiency of inspection [5, 6]. UAV patrol inspection systems have been established for power transmission lines, obtaining images of towers, ground wires, transmission wires, insulators, and transmission line channels through onboard cameras [7–9]. The UAV obtains the coordinate information of power pole towers and of the objects to be measured on them by fetching point cloud data [10, 11]. To achieve intelligent fault detection of high-voltage transmission lines, the automatic extraction of transmission lines and the locations of insulator strings has been achieved by image processing and recognition technology [12, 13]. Lei and Sui [14] proposed a deep convolutional neural network method based on Faster R-CNN. By continuing to train on datasets from selected areas, this method can obtain the category and location of specific components in the images and check and identify fault types, which effectively locates damaged parts such as insulators in power lines. At present, the viewpoint position during drone inspection is manually selected to control the drone's shooting position and angle [15]. However, manual viewpoint selection has the disadvantages of low efficiency and many blind spots, so it is difficult to guarantee a safe distance between the UAV and the target. In view of these problems, a new dynamic target tracking method for UAVs has been proposed to effectively maintain the safe distance between the UAV and the target [16].

Because of the high voltage of the transmission lines on power pole towers, they cause electromagnetic interference to the drone, affecting its airborne sensors and flight safety [17, 18]. Using an extended Kalman filter with colored noise, the electromagnetic interference to the onboard sensors can be predicted and corrected, which ensures flight stability [19]. Alhassan et al. [20] identified the power supply, automatic obstacle detection [21], and the control system of drones as the main challenges affecting drone inspection efficiency. Improvements to UAV obstacle avoidance algorithms will benefit the real-time obstacle avoidance ability of UAVs [22, 23]. To reduce the workload of inspection personnel and improve inspection efficiency, Wang et al. [24] proposed a new inspection method that uses drones and power tower inspection vehicles together and studies the path planning of drones and vehicles for collaborative inspection. A detection method for transmission line components based on image processing and analysis also effectively reduces the workload of manual analysis [25]. Nguyen et al. [26] proposed a new vision-based concept of automatic power-line inspection, which sets UAV inspection as the main inspection method, optical images as the main data source, and deep learning as the basis of data analysis and verification.

In summary, most existing research has focused on vision-based image recognition, coordinate positioning of UAVs and power poles in point cloud data, and obstacle avoidance. There is a lack of study on the selection of the inspection shooting position (viewpoint) for an object to be measured on the power pole tower. The patrol inspection of such an object cannot use a routine shooting method, because the drone needs to select the best angle and viewpoint so that the object is clearly recorded. The common method is mainly based on the 3D point cloud map, with the drone's shooting viewpoints selected manually. However, manual estimation can be biased, resulting in incomplete or unclear images. In addition, manual viewpoint selection does not take the influence of environmental lighting on the shooting into account. Moreover, manually selected viewpoints shoot the objects on the power pole towers one by one, which ends up producing too many viewpoints and lowering inspection efficiency. Therefore, this paper focuses on automatically determining the inspection viewpoints and shooting angles of multirotor UAVs while considering sun illumination, and on effectively reducing the number of inspection points.

2. Research Methods and Model Representation

2.1. Descriptive Model of the Viewpoint

Viewpoints refer to the spatial positions from which the drone shoots. Generally, seven parameters are used to represent a viewpoint: three spatial parameters describing the position of the viewpoint (x, y, z), where x, y, and z, respectively, represent the coordinates in the space rectangular coordinate system; three parameters describing the orientation and posture (θ, ψ, φ), where θ indicates the pitch angle, ψ indicates the yaw angle, and φ indicates the roll angle; and one parameter d describing the shooting direction. The multirotor UAV hovers in the air during actual flight shooting. At this time, the roll angle and pitch angle are basically 0 degrees, and only the yaw angle remains. Therefore, the viewpoint of a multirotor UAV can be defined as V = (x, y, z, ψ, d), as shown in Figure 1.
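As a concrete illustration, the five-parameter viewpoint can be sketched as a small Java class. The field names and the distance helper are our own choices; the paper only fixes the parameter set.

```java
// Sketch of the 5-parameter viewpoint model V = (x, y, z, yaw, direction)
// for a hovering multirotor UAV (roll and pitch assumed ~0).
class Viewpoint {
    final double x, y, z;      // position in the tower-centered coordinate system (m)
    final double yaw;          // yaw angle of the hovering UAV (degrees)
    final double direction;    // shooting direction of the camera (degrees)

    Viewpoint(double x, double y, double z, double yaw, double direction) {
        this.x = x; this.y = y; this.z = z;
        this.yaw = yaw; this.direction = direction;
    }

    /** Straight-line distance to a target point, e.g. for checking L > Ls. */
    double distanceTo(double tx, double ty, double tz) {
        double dx = x - tx, dy = y - ty, dz = z - tz;
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```

The `distanceTo` helper supports the safety-distance check introduced in the next subsection.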

2.2. Method of Determining Viewpoints Based on Safety Distance
2.2.1. Establish the Minimum Safety Distance Constraints for Drone Inspection

Due to the high voltage of the transmission lines, there is an electromagnetic field within a certain range around the conductor. This field interferes with the electronic equipment onboard and affects the operational performance of the drone, which in serious cases may cause the drone to crash and damage the power grid. Therefore, when the drone inspects the power towers, it should maintain a certain safety distance from them. The constraint is as follows:

L > Ls, (1)

where L represents the distance from the UAV to the nearest wire of the power pole tower and Ls is the safety distance corresponding to the voltage level of the conductor, as shown in Table 1.

2.2.2. Establish the Functional Relationship between the Size of the Components to Be Measured and the Parameters of the Drone Camera

The area of a component to be measured that is captured in a single shot is called the field of view. It is determined by the distance between the shooting point and the component, the focal length of the lens, and the imaging size on the CCD target surface, as shown in Figure 2.

The equation for calculating the focal length F is

F = L·w′/w = L·h′/h, (2)

where F is the focal length of the lens, L indicates the shooting distance between the viewpoint and the component to be measured, w′ is the imaging width of the component on the CCD target surface, w is the width of the component, h′ is the imaging height of the component on the CCD target surface, and h indicates the height of the component to be measured. The specifications of conventional CCD target surfaces are listed in Table 2.
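The similar-triangle relation behind equation (2) can be sketched in Java. The method names are our own, and all lengths must share one unit (metres here):

```java
// Pinhole-camera similar triangles: F / w' = L / w  =>  F = L * w' / w.
class LensModel {
    /**
     * Focal length needed so that a component of width w, shot from
     * distance l, fills an image of width wImg on the CCD target surface.
     */
    static double focalLength(double l, double wImg, double w) {
        return l * wImg / w;
    }

    /** Inverse relation: shooting distance implied by a given focal length. */
    static double shootingDistance(double f, double w, double wImg) {
        return f * w / wImg;
    }
}
```

For example, a 1 m wide component imaged onto a 4.8 mm wide CCD from 10 m requires a focal length of about 48 mm.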

2.2.3. Design the Algorithm of the Viewpoint L and Focal Length F of Each Component to Be Measured during Drone Inspection

UAV airborne cameras are divided into fixed-focus and zoom lenses. A fixed-focus lens needs to maintain an appropriate distance to ensure photo clarity, whereas the focal length of a zoom lens can be adapted flexibly to different shooting distances. For a camera with a fixed-focus lens of focal length F, the distance L between the component under test and the viewpoint can be calculated by the following equation:

L = F·w/w′ = F·h/h′.

For zoom lenses, in order to make the imaging size of the tested component as large as possible for inspection, the maximum focal length Fmax is used. The distance for each shooting viewpoint is then given by the following equation:

L = Fmax·w/w′ = Fmax·h/h′.

However, the shooting distance must also meet the constraint L > Ls. When the distance calculated above is smaller than Ls, the UAV shoots from the safety distance Ls instead, and the actual shooting focal length is adjusted along with equation (2), given by (4):

F = Ls·w′/w = Ls·h′/h. (4)
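The distance computation with the safety-distance clamp can be sketched as follows (a minimal sketch under the symbols above; the helper name and return convention are our own):

```java
// For a zoom lens: derive the shooting distance from the maximum focal
// length; if it violates the safety distance Ls, shoot from Ls instead
// with the focal length reduced per equation (2) re-solved at L = Ls.
class SafeShot {
    /** Returns {shooting distance L, focal length F}. */
    static double[] distanceAndFocal(double fMax, double w, double wImg, double ls) {
        double l = fMax * w / wImg;          // distance implied by max focal length
        if (l > ls) {
            return new double[] { l, fMax }; // already safe, keep max zoom
        }
        return new double[] { ls, ls * wImg / w }; // back off to Ls, reduce F
    }
}
```

With a 40 mm maximum focal length, a 1 m component, a 4.8 mm CCD, and Ls = 10 m, the implied distance (about 8.3 m) is unsafe, so the UAV shoots from 10 m with F reduced to 48 mm... note this exceeds Fmax in that toy case, which is exactly the situation handled by the focal-range branching in Section 3.1.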

3. Reduction of the Viewpoint of the Drone

At present, the drone's inspection tasks take one photo of each component. When the components are close to each other, the camera's fields of view can overlap, leading to repeated shooting and reduced inspection efficiency. Therefore, this section studies how to reduce the number of viewpoints.

3.1. Research on Reduction Methods

The space coordinate system is established by taking the center of the power pole tower foundation as the coordinate origin. Assume that the space coordinates of an object to be measured are (x, y, z), its width is w, and its height is h. The horizontal hovering error of the multirotor UAV is eh, and the vertical hovering error is ev.

3.1.1. Method 1

Determine whether other objects are within the shooting range when shooting an object.

Due to the hovering error of the multirotor drone, the actual shooting range of the drone is

(w + 2eh) × (h + 2ev). (6)

① Airborne camera with zoom lens: under the condition of satisfying the minimum safety distance Ls, in order to achieve full-frame shooting of the object, the required focal length FA can be calculated according to equation (2) as

FA = Ls·w′/(w + 2eh) = Ls·h′/(h + 2ev).

Supposing that the variable range of the camera's focal length is [Fmin, Fmax], if FA < Fmin, the camera cannot capture the scope of width (w + 2eh) times height (h + 2ev). The maximum range that the drone can shoot at this time is adjusted to

(Ls·w′/Fmin) × (Ls·h′/Fmin).

If Fmin ≤ FA ≤ Fmax, the range of drone shooting is

(w + 2eh) × (h + 2ev).

If FA > Fmax, the minimum range that the drone can shoot is adjusted to

(Ls·w′/Fmax) × (Ls·h′/Fmax).

② Airborne camera with fixed-focus lens: supposing that the focal length of the camera is F0, the shooting distance LA between the drone and the object to be measured can be calculated by

LA = F0·(w + 2eh)/w′.

If LA < Ls, the drone does not meet the requirement that the shooting distance should be larger than the minimum safety distance, so the shooting distance of the drone needs to be increased to Ls, giving a new shooting range of the drone:

(Ls·w′/F0) × (Ls·h′/F0).

If LA ≥ Ls, then the shooting range of the drone is

(w + 2eh) × (h + 2ev).
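The zoom-lens branching above amounts to clamping the required focal length into the lens's range and re-deriving the realized field of view. A minimal sketch under the reconstructed symbols (method and variable names are our own assumptions):

```java
// Method 1, zoom lens at minimum safety distance ls: pad the component by
// twice the hovering errors, compute the focal length needed for full-frame
// shooting, clamp it into [fMin, fMax], then derive the realized range
// W x H = (ls * wImg / F) x (ls * hImg / F).
class ShootingRange {
    static double[] zoomRange(double ls, double wImg, double hImg,
                              double w, double h, double eh, double ev,
                              double fMin, double fMax) {
        double needW = w + 2 * eh;                 // width padded by hover error
        double f = ls * wImg / needW;              // focal needed for that width
        f = Math.max(fMin, Math.min(fMax, f));     // clamp to the lens's focal range
        return new double[] { ls * wImg / f, ls * hImg / f };
    }
}
```

When the clamp is inactive the realized width equals the padded component width; when FA falls outside [Fmin, Fmax], the returned range is the adjusted maximum or minimum range given above.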

3.1.2. Method 2

Firstly, a reasonable shooting range is determined according to the shooting performance of the camera. Then, the objects included in this range are identified.

① Airborne camera with zoom lenses: since the UAV has a certain shooting distance during the inspection and all objects need to be photographed clearly and accurately, it is preferred to use a long focal length for the inspection and shooting.

According to equation (2), the shooting range W × H at the maximum focal length can be obtained:

W × H = (Ls·w′/Fmax) × (Ls·h′/Fmax).

The viewpoints corresponding to the components within this shooting range will be reduced.

② Airborne camera with fixed lenses: similarly, in order to shoot the object to be measured as clearly as possible, taking the minimum safety distance Ls as the shooting distance, the shooting range is calculated according to the following equation:

W × H = (Ls·w′/F0) × (Ls·h′/F0).

The viewpoints corresponding to the components within this shooting range will be reduced.

3.2. The Rules of Viewpoint Reduction When Shooting Regardless of the Shooting Surface

The components to be measured do not always have a fixed shooting surface. For example, when inspecting the insulator string, there is no need to consider the shooting angles and directions. Therefore, the viewpoints corresponding to all the objects to be measured falling within the shooting range can be reduced and merged.

(1) Method 1: when the drone shoots component A during inspection, the range (w + 2eh) × (h + 2ev) or its focal-length-adjusted counterpart from Section 3.1.1, depending on whether the camera mounted on the drone has a zoom or a fixed-focus lens, is taken as a reducible shooting range W × H. When multiple objects are within this range, they are photographed together, and the judgment condition for whether an object B is within the range is as follows:

|xB − xA| + wB/2 ≤ W/2 and |zB − zA| + hB/2 ≤ H/2, (16)

where xB and zB represent the space coordinate values of object B, and wB and hB indicate its width and height, respectively. When calculating the positions of the viewpoints, this shooting range is treated as a whole: the viewpoint is calculated from the central coordinates of the group, and the separate viewpoints of the other objects within the shooting range are removed.

(2) Method 2: the range W × H determined in Section 3.1.2, which depends on whether the camera's focal length is zoom or fixed, is taken as the window within which multiple objects are measured together. For any object i, the actual size after considering the influence of the drone can be calculated by equation (6) as (wi + 2eh) × (hi + 2ev), and the judgment conditions for whether two objects i and j are both within the shooting range are as follows:

|xi − xj| + (wi + 2eh)/2 + (wj + 2eh)/2 ≤ W and |zi − zj| + (hi + 2ev)/2 + (hj + 2ev)/2 ≤ H, (17)

where xi, zi, xj, and zj, respectively, represent the coordinate values of objects i and j, and wi, hi, wj, and hj, respectively, indicate their widths and heights. Taking this shooting range as a whole, the coordinates of the center point of the range are used to calculate the viewpoint positions of the drone.
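The containment test of Method 1 (our reconstruction of equation (16), since the original condition was garbled) can be sketched as a short Java predicate:

```java
// Object B, centred at (xb, zb) with size wb x hb, lies inside a W x H
// shooting window centred on (xa, za) iff it fits both horizontally and
// vertically. Symbol names follow our reconstruction of equation (16).
class RangeCheck {
    static boolean inside(double xa, double za, double W, double H,
                          double xb, double zb, double wb, double hb) {
        return Math.abs(xb - xa) + wb / 2 <= W / 2
            && Math.abs(zb - za) + hb / 2 <= H / 2;
    }
}
```

An object whose far edge would poke outside the window fails the test and keeps its own viewpoint.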

3.3. The Viewpoint Reduction Rules When the Shooting Surfaces Are on the Same Plane

Some objects to be measured need to be photographed from a specific angle, such as the hanging point at the conductor end or at the cross-arm end of the insulator string. There are certain regular patterns among the fixed shooting surfaces of some objects, the direction of the power pole tower, and the direction of the wire. Therefore, the plane of the object can be determined by the direction of the tower and the direction of the wire. Taking the cross section of the hanging point of the cross arm of the insulator string as an example, the plane equation is determined by three non-collinear points in space connected to the hanging point of the cross arm.

Assuming that the coordinates of the hanging point of the cross arm are known as (x1, y1, z1), the coordinates of the hanging point of the wire end are (x2, y2, z2), and the coordinates of the hanging point of the wire end of the adjacent tower connected to it are (x3, y3, z3), the plane equation of the hanging point at the cross arm can be calculated as

A(x − x1) + B(y − y1) + C(z − z1) = 0, (18)

where the values of A, B, and C are determined by the coordinate values of the different objects to be measured:

A = (y2 − y1)(z3 − z1) − (z2 − z1)(y3 − y1), (19)

B = (z2 − z1)(x3 − x1) − (x2 − x1)(z3 − z1), (20)

C = (x2 − x1)(y3 − y1) − (y2 − y1)(x3 − x1). (21)
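The construction behind equations (18)–(21) is the standard cross product of two in-plane vectors; a sketch:

```java
// Plane A(x-x1) + B(y-y1) + C(z-z1) = 0 through three non-collinear points,
// with (A, B, C) = (P2 - P1) x (P3 - P1). Returns {A, B, C, D} where
// D = -(A*x1 + B*y1 + C*z1), so the plane is A*x + B*y + C*z + D = 0.
class PlaneFit {
    static double[] plane(double[] p1, double[] p2, double[] p3) {
        double ux = p2[0] - p1[0], uy = p2[1] - p1[1], uz = p2[2] - p1[2];
        double vx = p3[0] - p1[0], vy = p3[1] - p1[1], vz = p3[2] - p1[2];
        double a = uy * vz - uz * vy;   // equation (19)
        double b = uz * vx - ux * vz;   // equation (20)
        double c = ux * vy - uy * vx;   // equation (21)
        double d = -(a * p1[0] + b * p1[1] + c * p1[2]);
        return new double[] { a, b, c, d };
    }
}
```

For the three points (0,0,0), (1,0,0), and (0,1,0), the result is the z = 0 plane with normal (0, 0, 1).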

If an object to be measured has a fixed shooting surface and that surface is not on the same plane as the shooting surfaces of other objects, merge processing cannot be performed. Therefore, the information about the shooting surfaces of the different objects should be obtained before evaluation. If the shooting surfaces of two objects to be measured are different, one of the objects should be removed from the evaluation process. By substituting the coordinate values of other objects into equations (18)–(21), it can be judged whether they are on the same plane as the object being photographed.

(1) Method 1: when there are multiple objects with fixed shooting surfaces and the shooting surfaces are on the same plane, a certain object is taken as the shooting center, and whether other objects with the same shooting surface are within its shooting range is determined. Assuming that the object to be measured has a fixed shooting surface, whether other objects to be tested are within this shooting range can be judged by equation (16).

(2) Method 2: the shooting range is known. For multiple coplanar objects with fixed shooting surfaces, the processing method is the same as in Method 1. Whether the multiple objects are within the shooting range can be judged by equation (17).

3.4. The Viewpoint Reduction Rule When the Shooting Surfaces Are on Different Planes

There are also scenarios in which multiple objects to be measured have fixed shooting surfaces that are not on the same plane. For example, the directions of the hanging points of two different insulator strings on the same side are different. Therefore, to shoot the two hanging points at the same time, an appropriate angle must be found that captures the fixed shooting surfaces of both hanging points simultaneously.

3.4.1. Method 1

When shooting an object i with a fixed shooting surface, the shooting range is determined first, which can be obtained as (wi + 2eh) × (hi + 2ev) according to equation (6). Secondly, whether other objects to be tested are within the shooting range is judged according to equation (16).

If an object satisfies the above equation, its coordinates are substituted into equation (18) to determine whether it is in the same plane as the object to be measured. When the shooting planes of the objects are in the same plane, reduction can be done according to the method introduced in Section 3.3. When the shooting planes are not the same, their individual plane equations need to be determined first.

Supposing that the space coordinates of the object i to be measured are (xi, yi, zi), its width is wi, and its height is hi, and the space coordinates of the object j to be measured are (xj, yj, zj), its width is wj, and its height is hj. According to equation (18), the plane equation of each shooting surface can be calculated, with normal vectors (Ai, Bi, Ci) and (Aj, Bj, Cj). Then, the equations of the normals of the two planes passing through the coordinate points of the objects to be measured can be calculated:

(x − xi)/Ai = (y − yi)/Bi = (z − zi)/Ci, (x − xj)/Aj = (y − yj)/Bj = (z − zj)/Cj. (23)

According to the experience of drone inspection and shooting, when the object shooting surfaces are not in the same plane and the angle between the shooting surfaces is less than 30°, the drone cannot clearly shoot the two objects to be measured at the same time. The angle θ between the planes can be calculated from the normals of the two shooting surfaces:

cos θ = |AiAj + BiBj + CiCj| / (√(Ai² + Bi² + Ci²) · √(Aj² + Bj² + Cj²)). (24)

When the angle between the planes is less than 30°, the two objects to be measured cannot be shot at the same time.
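The dihedral-angle test can be sketched as follows, applying the paper's empirical 30° rule to the plane normals (class and method names are our own):

```java
// Angle between two planes from their normals (A, B, C), per equation (24),
// and the paper's rule: surfaces are shootable together only at >= 30 degrees.
class PlaneAngle {
    static double angleDeg(double[] n1, double[] n2) {
        double dot = n1[0] * n2[0] + n1[1] * n2[1] + n1[2] * n2[2];
        double m1 = Math.sqrt(n1[0] * n1[0] + n1[1] * n1[1] + n1[2] * n1[2]);
        double m2 = Math.sqrt(n2[0] * n2[0] + n2[1] * n2[1] + n2[2] * n2[2]);
        // |dot| keeps the angle in [0, 90] degrees, matching equation (24)
        return Math.toDegrees(Math.acos(Math.abs(dot) / (m1 * m2)));
    }

    static boolean shootableTogether(double[] n1, double[] n2) {
        return angleDeg(n1, n2) >= 30.0;
    }
}
```

Perpendicular shooting surfaces (90°) pass the test; near-parallel but non-coplanar surfaces (angle below 30°) must keep separate viewpoints.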

In order to ensure that the drone can shoot the two objects to be measured at the same time, the viewpoint of the drone is moved to a position at which the angles to the normals of the two object planes are equal. Meanwhile, the distance between the drone and each of the two objects must remain greater than the minimum safety distance. The position of the viewpoint after the drone moves can be calculated by equation (25).

3.4.2. Method 2

The shooting range W × H is determined first according to Method 2 in Section 3.1. Secondly, whether any two objects to be measured on the power tower are within this range is judged by equation (17).

If the two objects to be measured are within the shooting range, their coordinates are substituted into equation (18) to determine whether they are on the same plane. When the shooting surfaces of the two objects are in the same plane, the corresponding viewpoints can be merged and reduced according to the second method in Section 3.3. Otherwise, the plane equation of each object is calculated according to equation (18), the normal equation of each plane is obtained by equation (23), and equation (24) gives the angle between the planes. If the angle between the planes is greater than 30°, the position of the viewpoint after the UAV is shifted is calculated by equation (25). Otherwise, the two objects cannot be shot together.

Considering the efficiency of the proposed reduction process, this paper designs two simplification methods. Method 1 only judges whether other complete components to be measured lie within the shooting range; if so, all components within the field of view are shot together and merged into one viewpoint. This method fails to reduce viewpoints when the field of view contains no other components or contains only parts of components. To further optimize the number of viewpoints, Method 2 adjusts the position and range of the field of view on the premise of ensuring shooting accuracy. In practice, workers need to choose between, or combine, the two viewpoint reduction methods according to actual engineering needs.

3.5. Algorithm Complexity Analysis

This paper sets the number of components to be measured as n. The operation process of the algorithm of Method 1 is as follows: firstly, when shooting component A, the algorithm traverses the other n − 1 components and finds that in total m components lie within the same shooting range as component A, which requires n − 1 runs. Then, it executes m times to judge whether each of the m components has a fixed shooting surface. If there is no shooting surface, the viewpoint is output directly. Otherwise, it runs m − 1 times to determine whether the shooting surfaces of the m components are coplanar. Therefore, the complexity of the algorithm in Method 1 is O(n).

The operation process of the algorithm of Method 2 is as follows: firstly, the shooting range is calculated according to the parameters of the airborne camera. For every one of the n components, it finds the m components within the same shooting range, which needs to be executed n(n − 1) times. It then executes m times to judge whether each of the m components has a fixed shooting surface. If there is no shooting surface, the viewpoint is output directly. Otherwise, it determines whether the shooting surfaces of the m components are coplanar, which needs to be executed m(m − 1) times. Therefore, the complexity of the algorithm in reduction Method 2 is O(n²).
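The traversal both methods build on, scanning the components and grouping those that fall inside one shooting window, can be sketched minimally (this uses a centre-only containment test as a simplified stand-in for the paper's full conditions):

```java
import java.util.ArrayList;
import java.util.List;

// One grouping pass: for a chosen component a, scan the remaining n - 1
// components once and collect those whose centre lies inside the W x H
// shooting window centred on a. One linear pass per shot.
class ReductionSketch {
    static List<Integer> groupWith(int a, double[][] centers, double W, double H) {
        List<Integer> group = new ArrayList<>();
        group.add(a);
        for (int j = 0; j < centers.length; j++) {
            if (j == a) continue;
            boolean fitsX = Math.abs(centers[j][0] - centers[a][0]) <= W / 2;
            boolean fitsZ = Math.abs(centers[j][2] - centers[a][2]) <= H / 2;
            if (fitsX && fitsZ) group.add(j);   // merge j's viewpoint into a's
        }
        return group;
    }
}
```

Running this once per shot gives Method 1's low cost; re-running it for every candidate window position is what drives Method 2's quadratic cost.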

To summarize, Method 1 has low complexity and high efficiency, but the number of reducible viewpoints is limited. Method 2 has high complexity and lower efficiency, but its viewpoint simplification ability is better. The automatic determination of viewpoints can effectively solve the image quality problem of incomplete components. At the same time, the viewpoint reduction algorithm reduces the number of UAV inspection viewpoints and increases the number of components captured in a single shot, thereby reducing the number of shots, which helps improve the inspection efficiency of power towers and saves cost.

4. Optimization of the Viewpoint Position Based on the Sun’s Trajectory

When shooting an object, forward-light (front-lit) shooting is usually selected so that the object is imaged more clearly. To ensure clear imaging, it is necessary to consider the illumination angle, since the imaging clarity of forward-light shooting is higher than that of backlight shooting. Therefore, it is critical to consider the sun's trajectory when optimizing the viewpoint positions obtained in the previous step.

4.1. Determination of the Sun’s Position and Angle

The sun's trajectory can be determined according to four parameters: the solar declination angle δ, the solar hour angle ω, the azimuth angle A, and the altitude angle h [27], as shown in Figure 3.

The figure above shows the altitude angle h and the azimuth angle A of the sun.

4.2. The Calculation of the Optimal Shooting Position of the Object to Be Measured without a Fixed Shooting Surface

Sunlight is taken as a factor in determining the viewpoint of the drone in order to optimize it. Assume that the spatial coordinates of the component to be measured are (0, 0, 0). The UAV shooting distance L, accumulated day number N, observation time t, observation point latitude φ, declination angle δ, solar hour angle ω, solar altitude angle h, and azimuth angle A are given. The coordinates of the viewpoint of the UAV are given by equation (26), and the position of the viewpoint is shown in Figure 4:

(x, y, z) = (L·cos h·sin A, L·cos h·cos A, L·sin h). (26)
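The solar quantities follow standard formulas (Cooper's declination approximation, the hour-angle and altitude relations). The final placement of the viewpoint along the sun's direction is our reconstruction of equation (26), and the axis convention is an assumption:

```java
// Standard solar geometry plus a forward-lit viewpoint placed at shooting
// distance l from a component at the origin, along the sun's direction.
class SunViewpoint {
    /** Cooper's approximation of the solar declination (degrees). */
    static double declinationDeg(int dayOfYear) {
        return 23.45 * Math.sin(Math.toRadians(360.0 * (284 + dayOfYear) / 365.0));
    }

    /** Hour angle: 15 degrees per hour from solar noon. */
    static double hourAngleDeg(double solarTimeHours) {
        return 15.0 * (solarTimeHours - 12.0);
    }

    /** Solar altitude: sin h = sin(lat)sin(decl) + cos(lat)cos(decl)cos(omega). */
    static double altitudeDeg(double latDeg, double declDeg, double hourDeg) {
        double lat = Math.toRadians(latDeg), d = Math.toRadians(declDeg), w = Math.toRadians(hourDeg);
        return Math.toDegrees(Math.asin(
            Math.sin(lat) * Math.sin(d) + Math.cos(lat) * Math.cos(d) * Math.cos(w)));
    }

    /** Viewpoint at distance l from the origin, along altitude/azimuth (degrees). */
    static double[] viewpoint(double l, double altDeg, double azDeg) {
        double h = Math.toRadians(altDeg), a = Math.toRadians(azDeg);
        return new double[] { l * Math.cos(h) * Math.sin(a),
                              l * Math.cos(h) * Math.cos(a),
                              l * Math.sin(h) };
    }
}
```

For example, at latitude 33° on an equinox (declination ≈ 0°) at solar noon, the altitude is 57°, and the viewpoint sits above the component on the sun side.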

4.3. Calculation of the Optimal Shooting Position of the Object to Be Measured with a Fixed Shooting Surface

For some noncylindrical objects, such as the power-pole number plate, inspectors choose frontal shooting to obtain the attribute information on them. The influence of light still needs to be considered, but under the premise of frontal shooting rather than prioritizing forward-light shooting. For an object to be measured with a fixed shooting surface, the viewpoint P of the drone that meets the inspection requirements is calculated first. Then, to account for lighting while ensuring that the fixed surface is photographed, P is taken as the middle point and shifted to the left and right by 30° to widen the range of candidate viewpoints; the shifted viewpoints are denoted Pl and Pr. Finally, Ps, the viewpoint calculated from the sunlight in Section 4.2, is compared with P, Pl, and Pr to obtain the final UAV viewpoint position given by the following equation:

5. Design and Implementation of the Algorithm for Automatic Determination of Shooting Viewpoints

The flow chart of the algorithm in this paper is shown in Figure 5.

6. Test Verification

The automatic viewpoint determination algorithm takes several factors into account, such as the hovering accuracy of the multirotor UAV and the external illumination, to calculate the optimal viewpoints. The algorithm is implemented in the Java language as a program with a visual interface. The parameters that can be configured in the program include the accumulated day number, observation time, observation point latitude, sizes of the components to be measured, camera focal length, and lens target surface size. The program can also automatically read the current date and time of the UAV system. Finally, according to these parameters, the program automatically calculates the corresponding viewpoints of the UAV and reduces the obtained viewpoint set through the viewpoint reduction rules.

As an example, a wine-glass-shaped power tower is inspected. The height of the tower is 50 m, and its three-dimensional point cloud map is shown in Figure 6. The vertical hovering error of the drone is 0.5 m, and the horizontal hovering error is 1.5 m. The power tower is located at latitude 33° North, and the relative spatial position coordinate of the tower foot is set as (0, 0, 0).

To verify the sensitivity and validity of the algorithm, the following comparison experiments are set up: algorithm comparison, comparison of lighting effects, comparison of the reduction of the number of viewpoints, and comparison of whether the drone hovering error is considered. All experiments used the same multirotor UAV. The experimental results are shown in Table 3.

From Table 3, the following conclusions can be obtained:

(1) The comparison between the first five experiments and Experiment 6 shows that the images shot at manually determined inspection points did not capture the complete power pole tower; the tower head is not photographed, so workers cannot evaluate whether the head is intact. In Experiment 5, using the method of this paper, the full view of the power pole tower is captured, and the maximum imaging size of the tower is guaranteed.

(2) Compared with Experiment 3, in Experiments 1 and 2, the ends of the conductors connecting different insulators and the connection parts between the conductors can be photographed together. Four different parts to be measured are photographed in one shot, and the original four viewpoints are reduced to one, which enhances the inspection efficiency. Experiment 3 only photographed part of the insulator string connection, showing incomplete shooting.

(3) The comparison between Experiments 1 and 2 shows that Experiment 1 takes the sunlight into account and optimizes the position of the viewpoint so that the components to be measured can be completely and clearly photographed. Experiment 2 did not take the sunlight into account; the overall brightness of the image is too high, so the components in the picture are hard to recognize, which interferes with evaluating whether the components on the power pole tower are damaged.

(4) The comparison between Experiments 3 and 4 shows that Experiment 3 takes the hovering error of the multirotor UAV into account, and the insulator string is shot completely. Experiment 4 did not consider the hovering error, resulting in incomplete shooting of the insulator string caused by the position deviation of the drone while hovering.

Therefore, the algorithm proposed in this paper can achieve full-frame shooting of the components to be measured under lighting conditions suitable for shooting. Also, it can effectively reduce the number of viewpoints and improve the inspection efficiency on the premise of finishing the inspection task.

According to the requirements for components to be measured in the inspection instruction manual of power towers, the spatial coordinate values of all components to be measured on the tower in the WGS-84 projection coordinate system can be obtained from the point cloud map, as shown in Table 4.

Firstly, regardless of the reduction of viewpoints, the viewpoint automatic determination algorithm is used to calculate the drone’s viewpoint coordinates of each component to be measured. The results are shown in Table 5.

Then, the number of viewpoints of the multirotor UAV is reduced through the viewpoint reduction algorithm in the viewpoint automatic determination algorithm. So, the simplified viewpoint coordinates are obtained, as shown in Table 6.

It can be seen from Table 6 that according to the viewpoint reduction rules, the three viewpoints corresponding to the hanging point of the left-phase insulator string wire end, left-phase insulator string cross arm, and left-phase insulator string are reduced to one viewpoint. The viewpoints in the middle phase and right phase are reduced in the same way. The number of inspection viewpoints of the multirotor UAV is simplified from 15 to 9 through viewpoint reduction.

7. Summary

This paper focuses on the problems that may occur when manually selecting the viewpoints for power pole towers, such as failing to capture the whole tower, backlit shooting, and excessive numbers of viewpoints. It considers the structure of the power towers, the locations and sizes of the components to be measured, the flight performance of the aircraft, and the specifications of the onboard camera. According to the actual requirements of inspection, an algorithm for UAV viewpoint determination based on space geometry is proposed. Firstly, the viewpoint positions of the multirotor UAV are calculated based on space geometry according to the requirements of power tower inspection. Because one component to be measured corresponds to one drone viewpoint, excessive viewpoints can result; to overcome this, corresponding reduction rules are given depending on the different positions of the objects to be measured on the towers, and objects that meet the reduction rules are photographed together to reduce the number of viewpoints. At the same time, to ensure that the photographed objects are clearly imaged, the impact of sunlight at different times on the shooting is considered, and the positions of the drone's viewpoints are optimized and adjusted, guaranteeing complete, forward-lit, and clear images and achieving the purpose of optimizing the viewpoint positions. To improve the calculation efficiency of the algorithm, a computer visual operation interface is built in Java, and the optimal inspection point positions of the UAV can be quickly calculated by inputting the parameters required for inspection.
Finally, this paper conducts a simulation experiment on a specific power tower, which is photographed using the viewpoints calculated by the designed algorithm. During the process, the influence of sun illumination is considered while ensuring that the whole object is shot, so that the problem of blurry images caused by lighting is solved, and the inspection efficiency of power poles is greatly enhanced.

Data Availability

The experimental code used to support the findings of this study is available from the corresponding author upon request.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

Authors’ Contributions

HZ guided the content and structure of the paper; HRL designed the algorithm and wrote the paper; HXW and ZH carried out the experiments and provided the experimental results; MDF proofread and polished the language. All authors reviewed the manuscript.