International Journal of Aerospace Engineering
Volume 2019, Article ID 6708450, 18 pages
https://doi.org/10.1155/2019/6708450
Research Article

Store Separation: Photogrammetric Solution for the Static Ejection Test

1Instituto Tecnológico de Aeronáutica (ITA), Brazil
2Instituto Nacional de Pesquisas Espaciais (INPE), Brazil
3Instituto de Pesquisas e Ensaios em Voo (IPEV), Brazil
4Instituto de Estudos Avançados (IEAV), Brazil
5Instituto de Aeronáutica e Espaço (IAE), Brazil

Correspondence should be addressed to Luiz Eduardo Guarino de Vasconcelos; du.guarino@gmail.com

Received 1 June 2018; Accepted 2 October 2018; Published 13 January 2019

Academic Editor: Antonio Concilio

Copyright © 2019 Luiz Eduardo Guarino de Vasconcelos et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The process of developing and certifying aircraft and aeronautical systems requires the execution of experimental flight test campaigns to determine the actual characteristics of the system being developed and/or validated. In this process, there are many campaigns that are inherently dangerous, such as store separation. In this particular case, the greatest risk is the collision of the store with the fuselage of the aircraft. To mitigate the risks of this campaign, it is necessary to compare the actual trajectory of a separation with its simulated estimates. With such information, it is possible to decide whether the next store release can be done with the required safety and/or whether the model used to estimate the separation trajectory is valid. Consequently, an accurate determination of the separation trajectory is necessary. Store separation is a strategic, relevant, and complex process for all nations. The two main techniques for determining quantitative store trajectory data with 6DoF (six degrees of freedom) are photogrammetry and instrumented telemetry packages (data obtained from inertial sensors installed in the store). Each presents advantages and disadvantages. With regard to photogrammetry, several market solutions can be used to perform these tests. However, the result of the separation trajectory is only obtained after the test flight, and therefore it is not possible to safely carry out more than one release on the same flight. In this context, the development and validation of a solution that allows near real-time separation analysis is an innovative and original contribution. This paper discusses the development and validation, through actual static ejection tests, of the components that will compose a new onboard optical trajectory system for use in store separation campaigns. This solution includes the implementation of a three-dimensional (3D) calibration field that allows calibration of the optical assembly with just one photo per optical assembly, the development of a complete analytical model for camera calibration, and the development of specific software for identification and tracking of targets in two-dimensional (2D) coordinate images and for three-dimensional (3D) coordinate trajectory calculation. In relation to the calibration, the analytical model is based on a pinhole-type camera and considers its intrinsic parameters. This allowed for a mean square error smaller than ±3.9 pixels @1σ. The 3D analysis software for 6DoF trajectory estimation was developed using photogrammetry techniques and absolute orientation. The uncertainty associated with the position measurement of each of the markers varies from ±0.02 mm to ±8.00 mm @1σ, depending on the geometry of the viewing angles. The experiments were carried out at IPEV (Flight Test Research Institute)/Brazil, and the results were considered satisfactory. We advocate that the knowledge gained through this research contributes to the development of new methods that permit almost real-time analysis in store separation tests.

1. Introduction

Store separation is a strategic and complex process for all nations [1]. Aircraft-store separation is an old concept; however, the accurate calculation of the store trajectory during separation from an aircraft is more recent [2]. Prior to 1960, there were practically no widely used or generally accepted methods for preflight prediction of store separation trajectories other than wind tunnel testing techniques.

In the past 100 years, there has been considerable advance in this area of engineering [3]. From grenades launched by hand from open cockpits, the field has progressed to smart bombs launched or dropped from unmanned aerial vehicles. Prediction capabilities have also shown significant improvements. Instead of the “hit-or-miss” method, there has been, for decades, an integrated process that includes computational fluid dynamics (CFD), wind tunnel testing, trajectory simulations, static ejection tests, and store separation flight tests.

With the advent of the modern high-speed attack aircraft or combat jets, there has been an increasing need to carry more and more stores and release them at ever-increasing speeds [2].

In a military aircraft, the release of each external store (i.e., ammunition, external fuel tank, capsules, bombs, missiles, among others) must be well planned and carried out safely [4]. In order to predict the store trajectories, simulation models with 6DoF (six degrees of freedom) are usually employed. These describe the movement of the aircraft and the store in relation to one another, as well as the inertial system.

There are situations in which the flight test engineer must estimate the effort required to obtain an airworthiness certification for an aircraft-store configuration [5]. These are (1) the development and certification of new aircraft and defense systems, (2) the technological updating of aircraft and military systems, and (3) the integration of certified weapons on a certified aircraft.

In the development of any type of store, the static ejection test (also known as pitch drop tests) is a fundamental part of the store separation campaign [3]. These tests are performed to establish the aircraft and store configurations required for flight testing.

There are two ways of determining the store trajectory data during the separation event, whether in flight or on the ground [5, 6]: (1) photogrammetry, using two or more fixed-orientation cameras, and (2) inertial navigation, using sensors installed in the store (i.e., a telemetry kit).

In general, both methods present advantages and disadvantages [6–8].

In this paper, the development and validation of a photogrammetric solution are presented, together with the static ejection tests carried out at IPEV. This paper is an extension of the conference paper [9]. This solution is part of a program under development that aims to allow the analysis of trajectories in near real time. The work carried out includes (1) the implementation of a three-dimensional (3D) calibration field that is used to calibrate the optical assembly with only one photo, (2) the development and validation of the complete analytical model to minimize the systematic distortions of the optical assembly, and (3) the development and validation, through static ejection tests, of specific software for identification and tracking of targets in 2D images and for calculating the 3D trajectory of the targets.

2. Store Separation

Store separation analysis is defined as the determination of the position and attitude histories of a store after it is deliberately detached or ejected from the aircraft while the store is still under the nonuniform aerodynamic interference existing around the aircraft [4].

Store separation analysis is necessary to establish or revise the operational store limits [10] and to analyze the store separation characteristics under various flight conditions. The purpose of this campaign is the experimental verification that the separation will not jeopardize the structural integrity of the aircraft, provided it occurs within the operational limits (i.e., the envelope) established for such an operation [10].

The store separation envelope (Figure 1) encompasses the range of altitude, speed, and pitch angle that allows the operational use of the stores [11]. The envelope boundary marks the limiting conditions at which a store can still be safely ejected from the aircraft.

Figure 1: Example of the spatial volume considered in the field of view of the camera.

Store separation tests ensure that the stores released from an aircraft can safely pass through the aerodynamic disturbance of the aircraft without striking the aircraft or other stores released simultaneously, which could damage the aircraft or prematurely detonate the stores [11].

The purpose of the store separation test is to collect sufficient data to ensure an acceptable separation in terms of safety [12].

3. Photogrammetry

Photogrammetry is the science of making accurate measurements from photographs [13]. This technique allows the collection of quantitative data from cameras mounted on the aircraft. Quantitative data is essential to validate the models developed in the store separation tests so that they can be used to improve the accuracy of store separation predictions. High sampling rate video cameras (i.e., greater than 60 frames per second), also known as high-speed cameras, are commonly used in such tests.

The main advantages of photogrammetry in relation to the use of an inertial system are the following [6–8]:
(1) Improved accuracy in determining the positions of markers and angles.
(2) The measurement system does not have to be discarded at each separation.
(3) Measurements are taken in the spatial domain; thus, the resulting positions and angles are very accurate.
(4) This method produces a better displacement measurement, as compared to the inertial system.
(5) It presents redundancy when more than one camera records the same test point with an overlapping field of view (i.e., a larger number of 2D observations of the markers is made in each frame), making the solution of the trajectory an overdetermined problem (at least 3 markers per camera are required to compute a 6DoF solution).
(6) It is reliable and fault-tolerant, because if a camera’s measurements are poor or if there is an error in its calibration, problems with that camera’s data will be evident and can be corrected or discarded.

In comparison with other measurement methods, photogrammetry is advantageous in that it measures positioning parameters of high-speed objects in adverse measurement environments, due to its rapid frequency response, without making contact with the object, besides being a high-precision method [14–16].

The main disadvantages of this method in relation to the inertial solution are the following [6–8]:
(1) It requires a meticulous preparation of the aircraft and store, which may require up to two additional working days for the installation and measurement of all reference marks.
(2) The calibration process for each optical assembly (camera, lens, and pod plexiglass window) is very complex, because it requires a dedicated laboratory, with reference marks in known positions, and the availability of a parameter identification tool to estimate the systematic error minimization coefficients and their associated uncertainty.
(3) The measurements of the secondary velocity and acceleration parameters, which are obtained by means of the position derivatives, are more susceptible to small perturbations due to image noise.
(4) The photogrammetric computational solution deals with massive data (e.g., a 10.67 GB/s sustained stream for a 1024 × 1024 pixel × 24-bit RGB image frame at 400 fps; see the short calculation sketch after this list). Therefore, this method is much more complex and time-consuming as compared to the inertial system (e.g., a 128 KB/s data stream for a set of up to 20 parameters × 16-bit words sampled at 400 sps, samples per second).
(5) It is dependent on environmental conditions.
(6) It is less accurate when there is noise in the image.
(7) Processing is interrupted when the store leaves the analysis volume.
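As a rough back-of-the-envelope check of the data volumes involved (this sketch is not from the paper), the snippet below computes sustained stream rates for arbitrary video and telemetry configurations; the figures quoted above are approximately reproduced if they are read as bit rates.

```python
# Illustrative sketch: sustained stream rate of an uncompressed video feed
# versus a telemetry parameter stream, in bits per second.
def video_rate_bits_per_s(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) video stream rate."""
    return width * height * bits_per_pixel * fps

def telemetry_rate_bits_per_s(n_params, bits_per_word, sps):
    """Telemetry stream rate."""
    return n_params * bits_per_word * sps

if __name__ == "__main__":
    # Figures quoted in the text: 1024 x 1024 pixels, 24-bit RGB at 400 fps
    # versus 20 parameters x 16 bits sampled at 400 sps.
    print(f"video:     {video_rate_bits_per_s(1024, 1024, 24, 400) / 1e9:.2f} Gbit/s")
    print(f"telemetry: {telemetry_rate_bits_per_s(20, 16, 400) / 1e3:.0f} kbit/s")
```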

The use of photogrammetry in store separation tests offers significant and unique technical and managerial challenges. When designing the algorithms, several factors must be considered, such as the camera angle, camera movement, image quality, focal length, lens distortions, and environmental conditions. Vapor frequently appears in the images, which may decrease the visibility of the store or even obscure it. A camera can also be affected by engine fluid on its lens, causing the camera to malfunction or stop completely. A photographic pod (Figure 2) is one of the resources that helps minimize these problems, since it allows the installation of the cameras in its interior. The use of a pod minimizes some problems and solves others; however, special attention must be paid to the temperature (heat) generated by the cameras, which can lead to condensation inside the capsule. Another important point in adopting the pod is the calibration of the optical assembly. Ideally, the cameras should first be installed and configured inside the pod, and only then should the calibration be carried out on the complete optical assembly (camera + lens + pod window).

Figure 2: Cameras installed and positioned: (a) front camera, (b) rear camera, and (c) photographic pod with 2 cameras.

In addition to the challenges previously cited, store separation tests occur in environments hostile to accurate measurements [17]. The light condition is a major challenge in the use of high-speed cameras. The store still under the wing may be under shading effect and upon release may be under intense sunlight. There can also be a variety of backgrounds after the release of the store, such as white cloud, navy blue, forest green, and snow white. The chosen solution should overcome the issues related to illumination and background diversity (Figure 3).

Figure 3: Example of some scenarios: (a) high luminous intensity, (b) low illumination, and (c) sky and earth.

Without photogrammetry, the flight test is qualitative in nature and consists of several flights approaching the edge of the separation envelope in small incremental steps.

The main objectives of the IPEV photogrammetric solution are the following: (1) obtain data on the trajectory of the store in relation to the aircraft, (2) define the ideal configuration of the optical assembly to capture the event, (3) perform the data reduction accurately and quickly, and (4) provide the customer with an analysis report.

4. Static Ejection Test

The NATO guidance [17] focuses on the compatibility, integration, and store separation tests that must be performed when integrating an existing or newly developed store on a military aircraft (new or existing).

Physical interface evaluation and electrical tests are the two fundamental trials required before static ejection tests can be performed. Details of the recommended procedures for the execution and validation of these tests are given in [17].

The purpose of the static ejection tests is to determine the store’s reactive force, the separation characteristics of the stores released from the aircraft’s ejector release unit (ERU), and the reliability of the weapon system [12]. These characteristics include the accelerations, angular velocities, and store attitudes (roll, pitch, and yaw) when the store leaves the influence of the ERU, the reactions transmitted to the aircraft structure by the ERU, and the dynamics transmitted to the store by the ejection mechanism.

In order to perform this type of test, IPEV developed a method that involves the following steps: planning, preparation, geometry determination, camera calibration, performance of test points, 2D analysis, and 3D analysis with 6DoF.

5. Planning

As previously stated, photogrammetric testing offers significant and unique technical and managerial challenges. At this stage, the planning of the whole test campaign is carried out, and the following activities and information are defined:
(i) The test period.
(ii) The participating teams: typically, these come from the imaging, aircraft maintenance, technical support, instrumentation, calibration, and topographic sectors.
(iii) The equipment to be used (aircraft, cameras, total station, luxmeter, trigger, damping tires (Figure 4), power sources, computers, forklift, among others).
(iv) The markers: the label of a typical marker is a circle or square of 4 or 6 inches with a bow tie feature (Figure 5). Additional information is placed next to each marker to facilitate its identification. This identification is composed of a letter and a number (Figure 5) and facilitates any postprocessing of the data. At IPEV, 20 or more markers are glued, in a predetermined pattern, to the store, the pylon, and the aircraft to allow a more accurate photogrammetric analysis of the store position. To this end, an algorithm was developed that estimates the pattern position of the markers on the store, pylon, and aircraft, in addition to the initial estimate of the camera position (Figure 6).
(v) The scenario sketch drawing.
(vi) The formal request for team support.
(vii) The number of test points and their characteristics (i.e., some determined movement of the store).
(viii) The equipment and test site reservation.
(ix) The formalization of the test campaign (i.e., preparation of the document with the campaign details).

Figure 4: Store installed on the pylon. Tires for cushioning placed on the ground.
Figure 5: Example of a marker used in the store separation.
Figure 6: 3D representation of the camera in relation to the targets.

6. Preparation

At this stage, all teams and equipment must be positioned at the test site. The store must be positioned on the aircraft pylon. The tires to cushion the impact of the fall are also put in place (Figure 4). The adhesives are placed on the store, pylon, and aircraft (Figure 5). The cameras are configured and their positions determined (Figure 2).

The position of the reflectors and the position of each person in the test are defined. The synchronization of the cameras is tested. Some releases are performed in order to test the ejector, the trigger, the test terminology to be used, and the capture of the separation by the cameras. Finally, the scenario sketch validation and possible adjustments are carried out (Figure 7).

Figure 7: Topographic polygon used to measure the coordinates of the reference marks.

7. Geometry Determination

To determine the geometry of the topographic polygon (Figure 7), it is essential to establish the longitudinal and lateral levelings of the aircraft. The lateral leveling (Figure 8) was carried out by means of a Tokyo Theodolite TM20C [18]. The longitudinal leveling (Figure 9) was measured by a Nikon Total Station NPL-632 [19], which guided the height adjustment operation of the hydraulic jacks on which the aircraft was supported. The aircraft leveling reference marks were used for this procedure, which was performed as instructed by the Aircraft Maintenance Manual.

Figure 8: Lateral leveling.
Figure 9: Longitudinal leveling.

Starting with the definition of the local system origin point (Figure 7), two supplementary reference points near the aircraft are defined (Figure 7) to form a topographical polygon, from which all required reference marks (Figure 7) could be properly viewed by the total station.

With this information, the store geometry could be measured, and the positions and attitudes of the cameras, as well as the aircraft geometry, could be determined.

A key component in this process is the high-speed digital camera [2]. There are currently different models and brands of high-speed cameras, of different sizes and configurations. The choice of the camera is usually dictated by its location. This also applies to the lenses. The location of the camera on the aircraft in relation to the store being photographed will probably dictate the choice of lens and its focal length. During the test flight, specifically close to the store release event, the illumination condition and the resulting background contrast can change significantly (e.g., from direct sunlight exposure to dark shadow). Therefore, the use of a fixed-aperture iris is not recommended, because it is almost impossible to predict, on the ground before the flight, the light conditions at the moment of the store release.

Nowadays, the availability of several commercial off-the-shelf (COTS) lenses in the market, with electronic iris, which matches the required space constraints, simplifies the selection process for the optical set.

Regardless of the brand or size of the camera and lens selected, it is extremely important to realize that there are many errors that must be compensated for in the video analysis. Almost everyone in the flight test profession recognizes that the combination of camera and lens should be calibrated and that, if the lenses are changed, the installation should be recalibrated. Likewise, most people know that the lenses distort the image in the video and that this distortion can be calibrated. However, there are other very important sources of camera error that should be considered for quality video data. The possible offset between the physical center and the optical center of each camera, introduced during manufacture, is one of these errors.

Another item to evaluate is the camera’s frame rate [2]. There are many frame rates to choose from; however, 200 frames per second is typically recommended for store separation analysis. A typical store will travel from its initial fixed position to the bottom of the camera view in 0.2 to 0.4 seconds (depending on the camera position and the lens chosen). At 200 frames per second, this will produce 40 to 80 usable data frames.

Because most lenses have some distortion near their outer perimeter, the last frames may be questionable. If the store is stable and of high density, most frames will be more than adequate for the analysis. If the store is light, relatively unstable, and moves quickly, most frames may be inadequate.

The Xavante AT-26 aircraft (i.e., EMBRAER 326GB, the Brazilian version of the AERMACCHI MB-326, built by EMBRAER), registration 4467, and Mikrotron Cube7 high-speed cameras were used in the tests [20]. The cameras were configured for an acquisition rate of 400 frames per second (fps) and mounted externally on the aircraft in a photographic pod to record the movements during the store release. Two identical cameras were used. This camera model includes a synchronization feature, which is fundamental in this type of test since it allows the cameras to take photos at the same instant. The cameras’ master-slave mechanism must be configured, with the master recording 4 frames fewer than the slaves, as recommended by the manufacturer. The time interval between these frames is the time required for the cameras to synchronize. The stamping of the Inter-Range Instrumentation Group (IRIG) time on each camera, provided by the instrumentation system on board the aircraft, is an additional mechanism to be used. The cameras are oriented to maximize the overlapping field of view. Because some stores are less than 1.5 m away from the camera and the measurement volumes are large, 6 mm and 10 mm lenses are normally used. As a result, the optical distortion must be calculated and corrected. For the tests carried out at IPEV, the Kowa 6 mm C-Mount lens was used [21]. An inert store of approximately 130 kg was used for the tests.

8. Camera Calibration Method

Several approaches are taken to correct the projective distortion, such as direct linear transformation, polynomial affine transformation, and photogrammetric transformation.

According to [22], the photogrammetric approach is the one with the smallest error, because it uses the camera’s own projection model; that is, it considers the principle of the collinearity equations. The disadvantage is that the roll, pitch, and yaw attitude angles provided by the aircraft system follow the aeronautical definition and are completely different from those used in photogrammetric models.

The collinearity equations are the mathematical model on which spatial resection is based. They relate the three-dimensional coordinates of the object space to the corresponding two-dimensional image space, taking into account the external orientation of the camera. The internal orientation is not considered here, since it is determined in the camera calibration process. To establish this relationship, these equations consider the location of the camera and its orientation angles, as well as the focal length, without considering the camera’s internal geometric distortions, as illustrated in Figure 10. In order to arrive at the collinearity equations, a homographic transformation with a scaling factor must be performed, in addition to the reference change with translations and rotations.
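For reference, a standard form of the collinearity equations is reproduced below; the notation here is generic and may differ from the one used in the authors' figures. A point (X, Y, Z) in the object space maps to image coordinates (x, y) through the camera position (X0, Y0, Z0), the elements rij of the rotation matrix, and the focal length f:

$$x = -f\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}, \qquad y = -f\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)}$$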

Figure 10: Change of reference from the object system to the image system, showing the camera, image, and object reference systems; the camera focal length (m); the rotation matrix (3 × 3) and the translation matrix (3 × 1); and the position of each reference point expressed in the image reference system (pixels) and in the object reference system (m).

9. Reference Systems

The camera’s three-dimensional reference system, Figure 11(a), originates at the perspective center (CP). It comprises one axis in the direction of the larger side of the (usually rectangular) sensor, oriented towards the right of a person looking from behind the camera; a second axis in the direction of the smaller side of the sensor, oriented upwards; and a third axis completing a clockwise rotation, aligned with the optical axis [23]. The two-dimensional image reference system, Figure 11(b), originates at the top left pixel. It comprises one axis in the direction of the columns, oriented towards the right, and another in the direction of the rows, oriented downwards. Another reference system used originates at the central pixel, comprising the x-axis in the direction of the columns, oriented towards the right, and the y-axis in the direction of the rows, oriented upwards, also shown in Figure 11(b).

Figure 11: Reference systems: (a) camera axes originating from the perspective center (CP) and (b) image axes originating from the upper left pixel and the center pixel.

The aircraft reference system, adopted by the navigation systems, Figure 12, originates at the center of gravity (CG). It comprises one axis in the longitudinal direction, coinciding with the fuselage leveling reference line and oriented forward; a second axis in the lateral direction, parallel to the plane of the wings and oriented towards the right wing; and a third axis completing a clockwise rotation [24].

Figure 12: Aircraft reference system.

Aircraft navigation systems use as a reference a body-fixed coordinate system with origin at the center of gravity, as shown in Figure 13(b) [24]. The photogrammetric system is based on the body-fixed camera axes, with origin at the perspective center, as shown in Figure 13(a) [23]. Thus, when the camera is fixed and aligned to the aircraft, the axes do not match, hence the need to adjust them.

Figure 13: (a) Photogrammetric system and (b) aeronautical system.

All the points defining the object can be represented in the two-dimensional image space by the principle of collinearity. For objects sufficiently distant from the camera, the image is formed in the focal plane; that is, the points assume a depth coordinate equal to the focal distance in the camera system. Thus, for a camera that is installed and aligned to the aircraft, as shown in Figure 13(b), the representation of the coordinates in the image system is derived from equation (1), from equation (2) when the transformation by the aeronautical system is used, Figure 14(a), and from equation (3) when the photogrammetric system is used, Figure 14(b). ENU represents the coordinates of a point on the Earth’s surface (East-North-Up), and the external orientation of the camera relative to the terrestrial ENU system is defined by its position and attitude. The remaining parameters are proportionality factors.

Figure 14: Image system when using the (a) aeronautical system and when using the (b) photogrammetric system.

The main objective in the photogrammetric process is to establish a strict geometric relation between the image and the object, so that information can be extracted from the object using only the image [25]. However, a raw image contains geometric distortions due to the influence of several intrinsic and extrinsic factors on the sensor. Thus, in order to obtain reliable metric information from images in the various applications, it is essential that the optical assembly (camera-lenses) is calibrated [26].

10. Calibration

Calibration consists of the experimental determination of a series of parameters that describe the process of image formation in the camera, in accordance with an analytical model, which relates the known coordinates of a reference grid, also known as calibration field, with the corresponding part of the image [27] (Figure 15).

Figure 15: Calibration field, camera, and image reference systems.

In this context, the calibration can be understood as an optimization process, in which the discrepancy between the observed image coordinates and the corresponding object coordinates is minimized with respect to the distortion model parameters. Given these parameters, the geometric transformation from the coordinates of the distorted image to the corrected image can be determined, and the image can then be resampled.

There are a number of field geometries and a number of calibration methodologies [28, 29].

The geometric distortion of the images recorded by a sensor is affected to a greater or lesser degree by several factors, depending on the sensor architecture and the platform in which it is inserted. There are geometric aberrations that affect the quality of the image but not the position of the objects in the image, namely, spherical aberrations, astigmatism, and field curvature [30].

The factors are subdivided into internal and external. The former are related to the sensor architecture, and their distortions are corrected in the process of internal orientation of the image, in which the so-called intrinsic parameters, determined in the calibration process, are used. The external factors are related to the medium in which the sensor is immersed and to its position and orientation in relation to the object of interest, the distortions being corrected in the process of external orientation of the image, in which the so-called extrinsic parameters are used [31].

11. Radial Distortion

This occurs due to the refraction suffered by the light rays when passing through one or more lenses of the optical assembly until they reach the film or sensor to form the image [30]. The symmetric radial distortion is modeled by means of a polynomial function with as many coefficients as required for accuracy [32]. Generally, only two coefficients are used; however, more comprehensive models use four. The radial error (in pixels) is expressed directly as a function of the normalized radial distance r to the principal axis (center of symmetry) and the calibration parameters, according to equation (4). The undistorted coordinates in the x and y directions, relative to the principal axis, are normalized by the focal length, that is, expressed in focal length units.

The most commonly used model is the one derived from the polynomial function (equation (4)), which expresses the error already decomposed into the x and y components of the coordinates [33]. The distorted image coordinates are calculated as a function of the undistorted coordinates and the intrinsic parameters, as shown in equation (5).

Substituting equation (4) into equation (5), equations (6) and (7) are obtained.
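For reference, the commonly used polynomial form of the symmetric radial distortion, of the type referred to in equations (4)–(7), is written below in standard notation (which may differ in detail from the authors'). With normalized undistorted coordinates (x̄, ȳ) and r² = x̄² + ȳ², the radial error and its decomposition are:

$$\delta_r(r) = k_1 r^3 + k_2 r^5 + k_3 r^7 + \cdots$$

$$x_d = \bar{x}\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + \cdots\right), \qquad y_d = \bar{y}\left(1 + k_1 r^2 + k_2 r^4 + k_3 r^6 + \cdots\right)$$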

12. Tangential Distortion

Tangential (or decentering) distortion is caused by misalignment of the optical axes of the lens elements relative to the plane of the sensor. Tangential distortion was first addressed in [34] and is modeled by means of tangential and radial components, as a function of the calibration parameters, the radial distance (in pixels), and the direction (in degrees) of the point of interest in the image relative to the direction of maximum tangential distortion [25].

The most commonly used model is the Brown-Conrady model, which expresses the errors in the x and y components as a function of the coordinates of the point considered and the tangential distortion parameters [35].

In most cases, high accuracy is not required, and the higher-order terms are neglected, making the model linear. Thus, the simplified Brown-Conrady model is the most commonly used. It expresses the distorted coordinates as a function of the undistorted coordinates.
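For reference, the simplified Brown-Conrady tangential correction is commonly written as follows (standard notation, with normalized undistorted coordinates (x̄, ȳ), r² = x̄² + ȳ², and tangential parameters p1 and p2; the authors' equations may differ in detail):

$$x_d = \bar{x} + \left[2 p_1 \bar{x}\bar{y} + p_2\left(r^2 + 2\bar{x}^2\right)\right], \qquad y_d = \bar{y} + \left[p_1\left(r^2 + 2\bar{y}^2\right) + 2 p_2 \bar{x}\bar{y}\right]$$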

13. Complete Model

A camera calibration model was developed considering the pinhole model augmented by the radial [36] and tangential [35] distortions that occur when the light rays pass through the lens before reaching the sensor; that is, the distortion parameters act on coordinates that are not yet affected by the intrinsic parameters of the camera matrix. The intrinsic matrix contains the affinity terms, the skew parameter, and the principal point offsets, in pixels, relative to the column and row of the central pixel, which are defined from the number of columns and rows in the image.
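In the usual convention (which may differ slightly from the authors' notation), the intrinsic matrix collects the affinity terms f_x and f_y (focal lengths expressed in pixels along the column and row directions), the skew parameter s, and the principal point offsets (c_x, c_y):

$$K = \begin{bmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$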

Therefore, the construction of the model begins with the isolated representation of the radial and tangential distortion effects, as shown in equation (14). In this formulation, the coordinates of a point are expressed in the reference system of the calibration field, the coordinates of the camera CP are expressed in the same field reference system, and the transformation involves a homographic scale factor, a reference change matrix, a rotation matrix, and a translation matrix.

Figure 16 illustrates the definition of the reference systems and some camera parameters.

Figure 16: Reference systems and parameters in the image representation of a camera: the coordinates in the image system with origin at the center pixel, in pixels; the column and row of the center pixel; the coordinates in the image system with origin at the principal point, in pixels; PP, the principal point of collimation, the intersection of the optical axis with the sensor; the column-row axes of the image reference system, originating at the upper left pixel; the principal point offsets, in pixels, measured in the column and row directions relative to the upper left pixel; the pixel dimensions; the dimensions of the sensor; and the number of columns and rows in the image.

By developing equation (14), it can be seen from equations (16) and (17) that the coordinates can be understood to be normalized by the focal length of the respective axis; that is, their units are focal lengths.

The combination of radial and tangential distortions results in equations (18) and (19), which relate the distorted normalized coordinates to the undistorted normalized ones, considering four radial distortion parameters and four tangential distortion parameters.

Thus, the intrinsic matrix can be applied to the normalized coordinates to impose the affinity, nonorthogonality, and principal point offset effects, as shown in [37], and to denormalize the coordinates in order to obtain the distorted coordinates in pixels, with origin at the central pixel, according to equation (20).

To represent the model in the row-column coordinate system, the affine transformation given by equation (21) must be applied, which yields the column and row coordinates of the distorted image.

As a result, equation (22) represents the complete analytical model of all internal camera distortions.
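Putting the previous steps together, the complete forward model can be summarized, in standard notation analogous to the authors' equations (14)–(22), as the composition of the external orientation, the distortion, and the intrinsic transformation:

$$\begin{bmatrix} x_n \\ y_n \\ 1 \end{bmatrix} \sim \begin{bmatrix} R \mid T \end{bmatrix}\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad (x_d, y_d) = \mathrm{distort}\left(x_n, y_n;\, k_i, p_i\right), \qquad \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\begin{bmatrix} x_d \\ y_d \\ 1 \end{bmatrix}$$

The final affine change to the row-column system then accounts for the pixel origin at the upper left corner of the image.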

The geometric correction of the images was carried out by means of the camera parameters; some were extracted from the manual and others calculated. Figure 17 shows the definition of each one of them, and the following equations express how they were obtained.

Figure 17: Geometric parameters of the pinhole camera model: the focal length; PP, the principal point of collimation, the intercept of the optical axis with the image plane; CP, the perspective center, the point through which the light rays from the ground reach the image; the sensor pixel size; the size of the sensor frame; the frame size of the image projected on the ground; GSD (ground sample distance), the pixel size projected on the ground; the size of an object on the ground and its corresponding dimension in the image; the projected distance from the CP to the ground, corresponding to the flight height; FOV (field of view), the total aperture angle representing the field of view covered by the sensor; and IFOV (instantaneous field of view), the aperture angle representing the field of view of a single pixel.

The mathematical relations used to obtain the geometric parameters can be extracted from the geometry of the figure. These refer to the simplified pinhole camera model, free of internal distortion. In order to obtain the pixel coordinates in the center-pixel system or in the principal point system from the coordinates in the row-column system, the affine transformations given by equations (23) and (24), respectively, must be used.
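As a reference, the basic pinhole relations associated with the quantities in Figure 17 take the following standard form (notation assumed here: f is the focal length, p the pixel size, d_s the sensor dimension, H the distance from the CP to the ground, and D_g the frame size projected on the ground):

$$\mathrm{GSD} = \frac{H\,p}{f}, \qquad D_g = \frac{H\,d_s}{f}, \qquad \mathrm{FOV} = 2\arctan\!\left(\frac{d_s}{2f}\right), \qquad \mathrm{IFOV} = 2\arctan\!\left(\frac{p}{2f}\right)$$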

14. Calibration Field

As has been seen, the calibration has the objective of identifying the intrinsic parameters of the cameras’ geometric distortion, in order to correct the position of the point pixels extracted from the spatial resection images.

There are several calibration methods that essentially differ in the type and geometry of the field, the number of required photographs, the camera positioning method in the field, the quantity and arrangement of distortion parameters in the mathematical model, the model adjustment methodology for identification of parameters, among others. Comparative studies of the various calibration methods have already been carried out [28, 29, 38].

Some studies show the calibration and the photographic survey of the object features being performed simultaneously, with the purpose of composing a mosaic. This is called on-the-job calibration [39, 40].

Other studies employ the calibration method that uses as a field a coplanar target with equally spaced markers, also known as the chessboard [41–43]. This method requires several convergent poses to compensate for the nonconditioned (non-three-dimensional) field, but it has the advantage of not requiring a fixed location for the calibration or knowledge of the markers’ coordinates. This method is implemented in MATLAB [44] and OpenCV [45], as sketched below.
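For illustration only (this is not the authors' procedure, which uses the 3D field described later), a minimal OpenCV sketch of the chessboard calibration could look as follows; the board size and file paths are hypothetical placeholders.

```python
# Illustrative sketch: planar "chessboard" calibration with OpenCV.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per row and column (assumed board geometry)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # unit squares

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calibration/*.png"):        # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix and distortion coefficients (k1, k2, p1, p2, k3)
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error (pixels):", rms)
```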

A variation of this method employs as a field a target with markers arranged in two orthogonal planes, pentagon-like shaped [36]. Another uses a dual system of converging cameras [46]. The arrangement of markers on external façades of buildings is also a widely used technique, especially for high focal length cameras [43, 46].

Calibrations that require greater accuracy are usually performed in laboratories or controlled environments, where the coordinates of the markers are precisely known [43, 47]. In this work, an algorithm based on this approach was used [37].

To this end, captures at a location of known reference coordinates are required. A geometric calibration field was set up at the IPEV, in a room within the X-30 hangar, consisting of a three-dimensional space with 134 markers, as indicated in Figure 18. The markers are cross-shaped in black and white (floor), blue and white (right wall), red and white (left wall), and green and white (ceiling) (Figure 18). The colors are for easy identification and topographic survey. The markers were constructed with 5 cm pieces of aluminum angle and vinyl adhesives, each containing the cross and its identification.

Figure 18: IPEV’s geometric calibration field. Detail of the identified markers and of the positions marked on the floor for camera positioning.

The markers were arranged in layers of various depths in order to break the linear dependence that occurs between some parameters of the distortion model. In addition, the spacing between the markers was projected, so that they were homogeneously distributed in the image and covering the entire photo frame, either horizontally or vertically. To achieve this, the image of a generic camera in the field was simulated, and the row-column coordinates for the markers were defined in the table, as shown in Figure 18. Thus, the 3D coordinates where the field markers would be fixed were obtained by the camera’s pinhole projection model.

The markers were positioned on the walls, ceiling, and floor, so that the image appears homogeneously distributed and filled in both the vertical and horizontal orientations. For this, the position of each marker had to be simulated (Figure 19). The positioning was performed by means of a leveled base laser in order to project a vertical or horizontal line on the surface (wall, floor, or ceiling). A rigid ruler was placed on this line to measure the previously determined spacings. Small deviations (millimeters) in the positioning do not significantly affect the homogeneous distribution in the frame.

Figure 19: Simulation of the positioning of each marker in the calibration field.

In the calibration process, the exact 3D coordinates of the field markers must be known. Thus, a total topographic station [19] was used to determine the three-dimensional Cartesian coordinates of the 134 markers.

15. Calibration of Cameras

The calibration methodology developed with this field provides information on the external orientation of the camera, that is, its position in relation to the markers and its orientation angles. This favors convergence in the adjustment of the distortion model and the breaking of the linear dependence between some parameters.

Photographic shots were taken with the cameras used in the static ejection tests. Only one image per camera was needed. Subsequently, the row-column coordinates of the markers in each image were captured, associating them to the corresponding field coordinates obtained with the total station, in addition to generating an image indicating the location of the captured markers (Figure 20). Since all the points extracted from the image were used, it was not necessary to verify the minimum number of points needed to guarantee the desired calibration accuracy.

Figure 20: Example of capture performed in the calibration field with the pod front camera used in the static ejection test.

Having the three-dimensional Cartesian coordinates of the calibration field markers and the corresponding row-column coordinates in the captured image, the calibration is performed by solving a nonlinear system of equations with 19 unknowns, where the number of equations depends on the number of markers captured and the unknowns are the geometric distortion parameters together with the external orientation parameters. For the resolution, an iterative matrix least squares adjustment is performed, with an initial estimate of the parameters and the definition of maximum and minimum permitted limits (a simplified sketch of such an adjustment is given below). The radial distortion was fitted with a six-coefficient polynomial model, shown in Figure 21, with good adherence obtained.
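A simplified sketch of this kind of iterative least-squares adjustment is shown below; it is not the authors' MATLAB implementation, and for brevity it uses a reduced parameter set (focal lengths, principal point, two radial and two tangential coefficients, plus the six external orientation parameters) rather than the full 19-unknown model.

```python
# Simplified sketch: reprojection residuals of the 3D field markers are minimized
# with respect to the camera parameters (reduced parameter set for brevity).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, pts3d):
    """Pinhole projection with 2 radial + 2 tangential distortion terms."""
    fx, fy, cx, cy, k1, k2, p1, p2 = params[:8]
    rvec, tvec = params[8:11], params[11:14]
    pc = Rotation.from_rotvec(rvec).apply(pts3d) + tvec      # field -> camera frame
    x, y = pc[:, 0] / pc[:, 2], pc[:, 1] / pc[:, 2]          # normalized coordinates
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)
    yd = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return np.column_stack([fx * xd + cx, fy * yd + cy])     # pixel coordinates

def residuals(params, pts3d, pts2d):
    return (project(params, pts3d) - pts2d).ravel()

def calibrate(pts3d, pts2d, initial_guess):
    """pts3d: (n, 3) field marker coordinates; pts2d: (n, 2) measured pixels."""
    result = least_squares(residuals, initial_guess, args=(pts3d, pts2d))
    rms = np.sqrt(np.mean(result.fun**2))
    return result.x, rms
```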

Figure 21: Radial distortion curve obtained in the IPEV field, considering the pod rear camera.

From the parameters identified in the complete distortion model, reprojection of the markers of the calibration field was performed in the two-dimensional image system and compared to the real coordinates obtained in the capture. This allowed a qualitative assessment of the model adherence, as shown in Figure 22.

Figure 22: Reprojection of the markers of the calibration field according to the model developed with the parameters identified for the rear camera.

For each mark, the discrepancy of the reprojection coordinate, provided by the identified model, was evaluated in relation to the corresponding real coordinate captured from the image.

The result is presented in Figure 23, showing a satisfactory mean square error of less than one pixel for the rear camera. For the front camera, the mean square error is ±1.33 pixels @1σ.

Figure 23: Analysis of the reprojection errors of the markers of the calibration field for the rear camera.

With the parameters identified in the calibration process, the images generated by the camera containing geometric distortions could be resampled pixel by pixel in order to represent the image that would be generated by a pinhole camera, free of distortions. The nearest neighbor interpolation method was used, maintaining the original pixel size.
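A minimal sketch of such a resampling step is given below; it uses OpenCV's simplified Brown distortion model rather than the authors' complete analytical model, and K and dist are assumed to come from a calibration step.

```python
# Illustrative sketch: resampling a distorted image into the equivalent
# distortion-free pinhole image, with nearest neighbour interpolation.
import cv2

def undistort_nearest(image, K, dist):
    h, w = image.shape[:2]
    map1, map2 = cv2.initUndistortRectifyMap(
        K, dist, None, K, (w, h), cv2.CV_32FC1)
    return cv2.remap(image, map1, map2, interpolation=cv2.INTER_NEAREST)
```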

16. Performance of Test Points

In order to carry out the tests, all equipment must be in place (i.e., tires, store placed on the pylon, computers mounted and connected, and cameras mounted, connected, configured, and synchronized). There is one computer (notebook) configured for each camera, with a standard off-the-shelf configuration. As soon as the store is ready to be released, the cameras are triggered for recording. The illuminance in the test is also measured using the Minipa Digital Luminometer DMM-1011 [48]; in general, the illuminance measured at the center of mass of the store was 7200 lux. Thereafter, the test engineer activates a trigger to mark the start of the test point (i.e., to identify the initial frame in the cameras), and 0.5 seconds later the store is released. Once the store made contact with the tires, the cameras were paused by the operators of each camera. The videos were then downloaded from the cameras and reviewed to determine their validity. If one of the videos was unsuitable (e.g., not synchronized, loss of frames, capture failure, or any other issue), the videos were discarded; otherwise, they were renamed and stored. A standard nomenclature was used to facilitate the identification of the videos: they were named with the pattern ORIGIN_DATE_TEST POINT NUMBER, with “origin” being the camera number; “date” the day, month, and year of the test; and “test point number” a sequence number starting at 01. The videos were stored on the local computer.

Subsequently, the next test point is set up by repositioning the store on the pylon.

A total of 10 valid test points were performed in a period of 2 hours. At the end of the test day, the videos were stored in the IPEV data server.

17. 2D Analysis

Once the test points for the day were performed, the IPEV photogrammetry team started the information processing task.

Video files are transferred to a local directory for processing. The computer used for processing the results is a notebook with a standard off-the-shelf configuration. The first step is to create the images (frames) from the videos. An algorithm was developed that reads each video and creates a folder called “images”, into which the images of the test point are placed (a sketch of this step is given below). The numbering of the images starts at “10,000” to facilitate their ordering.
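A minimal sketch of this frame-extraction step is shown below; it is not the authors' MATLAB implementation, and the file name in the usage comment is a hypothetical example of the naming convention described above.

```python
# Illustrative sketch: extracting frames from a test-point video into an
# "images" folder, numbering from 10000 as described in the text.
import os
import cv2

def extract_frames(video_path, out_dir="images", start_index=10000):
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    index = start_index
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{index}.png"), frame)
        index += 1
    cap.release()
    return index - start_index   # number of frames written

# Hypothetical file name following the ORIGIN_DATE_TESTPOINT convention:
# extract_frames("01_20171115_01.avi")
```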

A two-second video has an average size of 2.8 GB, with each image having 3.5 MB. The rear camera has a resolution of 1184 × 1040 pixels. The front camera of the pod has a resolution of 1248 × 968 pixels.

In general, each video has 0.5 second of images that precede the beginning of the separation and 0.5 second of images after the contact of the store with the tires. These images are not needed for the processing. For this reason, another algorithm was developed to automatically recognize unnecessary images to be discarded. An example of a test point indicating the beginning of the store separation is shown in Figure 24.

Figure 24: Indication of the start of store separation.

The cameras provide standard RGB images; therefore, they had to be converted to grayscale. After that, equalization of the image histogram is performed in order to improve its brightness and contrast. Figure 25(a) shows an example of an original image, and Figure 25(b) shows the image after equalization of the histogram.
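A minimal sketch of this preprocessing step, assuming OpenCV rather than the authors' MATLAB environment, could be:

```python
# Illustrative sketch: conversion to grayscale followed by histogram equalization.
import cv2

def preprocess(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)
```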

Figure 25: (a) Original image and (b) image after histogram equalization.

The next step is to determine the center of each marker (considering the store, pylon, and wing markers) so that they can be tracked during the separation. For this, the region of interest for processing is limited to the store and the pylon. This is possible because the position of the store and the pylon relative to the camera does not change considerably between separations. The position of each marker was previously determined and measured by means of a total station [19]. Then, an identification algorithm scans the image looking for corners (a simplified sketch is given below). In Figure 26, it can be observed that 19 markers were found (red dots) on the store, 7 markers on the pylon, and 13 markers on the wing, by the rear camera.
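A simplified sketch of such a corner search restricted to a region of interest is given below; it is not the authors' algorithm, and the detector parameters are hypothetical.

```python
# Illustrative sketch: corner detection restricted to a region of interest
# around the store and pylon.
import cv2
import numpy as np

def find_marker_corners(gray, roi, max_corners=60):
    """roi = (x, y, w, h) in pixels; returns corners in full-image pixel coordinates."""
    x, y, w, h = roi
    patch = gray[y:y + h, x:x + w]
    corners = cv2.goodFeaturesToTrack(
        patch, maxCorners=max_corners, qualityLevel=0.05, minDistance=10)
    if corners is None:
        return np.empty((0, 2))
    return corners.reshape(-1, 2) + np.array([x, y], dtype=np.float32)
```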

Figure 26: Image with markers identified in store, pylon, and aircraft.

The centers of the highlighted markers (in red) can be observed in Figure 27, as well as a bounding box around each marker that serves only as a visual reference to indicate that the marker was identified in the frame.

Figure 27: Image with identified markers and the region of interest (green bounding box) highlighted around each reference, used in the next frame in order to maintain tracking of the markers.

In Figure 28(a), it is possible to observe the marker in an original image captured by the camera with its respective histogram. Figure 28(b) shows the result of the histogram equalization.

Figure 28: (a) Example of a marker and its histogram and the (b) marker after equalization of the histogram.

In Figure 29, the aircraft, pylon, and store markers that were tracked during the test point can be observed.

Figure 29: 2D separation trajectory measured by the rear camera.

Figure 30 shows the trajectory of the markers overlaid on the last image of a test point. This allows a visual validation of the trajectory of the markers. It can be seen that the tracking of some markers was lost during the test point.

Figure 30: Store trajectory measured by the front camera.

For a 3D solution, two or more cameras are used, which also makes it possible to quantify the error. In the case of the pod used, only two cameras can be installed. The 3D analysis is therefore performed after running the 2D analysis algorithm.

18. 3D Analysis

Given the 2D frames (images) of each camera, the problem of determining the position of each marker is solved using least squares. Each marker is defined by the intersection of two lines, generated by the line of sight of each camera. Each straight line in space is represented by two equations in x, y, and z, so that with two lines (4 equations) an overdetermined system in 3 unknowns is obtained, which is solved by least squares. With a third camera, two more equations are obtained. In theory, an exact intersection between two lines in space could be quickly checked by means of a determinant; in experimental measurements, however, the lines will hardly ever intersect exactly, so the least squares solution is used (a point that does not belong to either line but is simultaneously closest to both), as sketched below.
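A minimal sketch of this least-squares intersection of viewing rays is given below; it is not the authors' code, and each ray is assumed to be given by a camera center and a direction obtained from the calibrated camera model.

```python
# Illustrative sketch: least-squares intersection of two (or more) viewing rays.
# The returned point minimizes the sum of squared distances to all rays.
import numpy as np

def triangulate(centres, directions):
    """centres, directions: arrays of shape (n_rays, 3)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centres, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to the ray
        A += P
        b += P @ c
    point, *_ = np.linalg.lstsq(A, b, rcond=None)
    return point
```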

Given the coordinates of a number of points measured in two different Cartesian coordinate systems (Figure 31) [49], the photogrammetric problem of recovering the transformation between the two systems from these measurements is called absolute orientation [50]. This problem is common in several contexts.

Figure 31: The coordinates of a number of points are measured in two different coordinate systems. The transformation between the two systems is to be found. Adapted from [49].

The transformation between two Cartesian coordinate systems can be thought of as the result of a rigid body movement and can thus be decomposed into a rotation and a translation. In stereo photogrammetry, however, the scale may not be known. There are obviously three degrees of freedom for the translation. The rotation has another three (the direction of the axis about which the rotation occurs plus the angle of rotation around this axis). Scaling adds one more degree of freedom. Three known points in both coordinate systems provide nine constraints (three coordinates each), more than enough to allow determination of the seven unknowns (3 translations, 3 rotations, and 1 scale).

Discarding two of the constraints, seven equations with seven unknowns can be developed to allow the determination of the parameters.

The algorithm presented in [49] uses all available information to obtain the best possible estimate (in a least-squares sense). In addition, it is preferable to use it with centroid estimation instead of relying on single point measurements. This algorithm is used in the present work; a sketch of an equivalent formulation is given below.
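The sketch below shows an SVD-based solution of the absolute orientation problem in the spirit of [49] (rotation, scale, and translation that best map one point set onto another in a least-squares sense); it is an illustrative variant, not the authors' implementation.

```python
# Illustrative sketch: absolute orientation (rotation R, scale s, translation t)
# between two corresponding 3D point sets, solved in closed form via SVD.
import numpy as np

def absolute_orientation(A, B):
    """A, B: (n, 3) corresponding point sets. Returns (R, s, t) with B ~ s * R @ A + t."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - ca, B - cb
    U, S, Vt = np.linalg.svd(A0.T @ B0)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / np.sum(A0**2)
    t = cb - s * R @ ca
    return R, s, t
```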

The main purpose of the 3D analysis is to obtain the 6DoF data. An example of a test point, considering the position coordinates, is shown in Figure 32.

Figure 32: Data from a test point.

Figure 33 shows an example of a test point, considering the roll, pitch, and yaw angles.

Figure 33: Data in roll, pitch, and yaw of a test point.

Taking into consideration the image size of reference points with known dimensions, it is possible to obtain the pixel size.

For each marker coordinate, the error was measured in each video frame, considering the two cameras. Considering the errors of all markers, the mean square errors obtained for the front camera in the two image directions are ±0.38 pixels @1σ and ±0.83 pixels @1σ. For the rear camera, the mean square errors are ±0.1 pixels @1σ and ±0.78 pixels @1σ.

The maximum errors for the front camera were 3.5 pixels and 3 pixels in the two image directions; for the rear camera, the maximum errors were 0.4 pixels and 3.9 pixels. Therefore, the maximum error obtained is approximately 8.00 mm, a value considered satisfactory for this solution.

Figure 34 shows an example of 3D tracking of a test point with five markers.

Figure 34: 3D chart for analysis of a test point.

The frame-by-frame position error determination is shown in Figure 35.

Figure 35: Position error of each marker, frame by frame, considering a test point with only 5 best markers.

All the algorithms were developed in MATLAB.

19. Final Considerations

The objective of this paper is to demonstrate the implementation of the photogrammetric solution for the analysis of static ejection tests.

The solution presented is promising. The errors obtained in both the calibration method and the software are better than ±8.00 mm @1σ, which was considered acceptable for this type of test.

In relation to the calibration, the analytical model incorporates the pinhole camera and its intrinsic parameters. This generated a mean square error of less than one pixel. For the 3D analysis software with 6DoF, developed using photogrammetry and absolute orientation, the frame-by-frame position error ranges from ±0.02 mm to ±8.00 mm, depending on the position of the marker chosen for processing. The experiments were performed at IPEV, and the results obtained are considered satisfactory.

This solution is part of an IPEV development program that aims to perform the real-time analysis of store separation flight tests using photogrammetry.

Suggestions for future work are (i) to carry out new static ejection tests with possible variations of the store in 6DoF once the aircraft has released it, (ii) to evaluate the performance of the software in relation to the variation of luminosity in the test environment, and (iii) to continue the development of the software in order to perform the near real-time analysis.

Data Availability

The authors agree to provide the underlying data of this manuscript if requested. The files will be made available via Google Drive by the corresponding author upon request.

Conflicts of Interest

The authors declare no conflict of interest regarding the publication of this paper.

References

  1. S. R. Perillo and D. J. Atkins, “Challenges and emerging trends in store separation engineering: an Air Force SEEK EAGLE Office perspective,” in 47th AIAA Aerospace Sciences Meeting including The New Horizons Forum and Aerospace Exposition, Orlando, Florida, 2009.
  2. R. J. Arnold and C. S. Epstein, AGARD Flight Test Techniques Series on Store Separation Flight Testing. Volume 5, NATO, 1986.
  3. A. Cenko, Store Separation Overview, AIWS LLC, 2016.
  4. I. Persson and A. Lindberg, “Transonic store separation studies on the SAAB Gripen aircraft using computational aerodynamics,” in Proceedings of the 26th ICAS Conference, pp. 14–19, Anchorage, AK, USA, September 2008, http://www.icas.org/ICAS_ARCHIVE/ICAS2008/ABSTRACTS/040.HTM.
  5. A. Cenko, “Store separation lessons learned during the last 30 years, NAVAIR, Patuxent River, MD 20670,” in 27th International Congress of the Aeronautical Sciences (ICAS 2010) Proceedings, 2010, https://apps.dtic.mil/dtic/tr/fulltext/u2/a538155.pdf.
  6. E. Forsman, S. Getson, D. Schug, and G. Urtz, Improved Analysis Techniques for More Efficient Weapon Separation Testing, NAVAIR, 2008.
  7. E. Hallberg and W. Godiksen, “MATLAB based telemetry integration utility for store separation analysis,” in 2007 U.S. Air Force T&E Days, Destin, Florida, February 2007.
  8. E. Forsman and D. Schug, Estimating Store 6DoF Trajectories Using Sensor Fusion between Photogrammetry and 6DoF Telemetry, NAVAIR Public Release, 2011.
  9. L. E. G. Vasconcelos, N. P. O. Leite, A. K. Yoshimi, L. Roberto, and C. M. A. Lopes, “Method and software to perform pitch drop,” in ettc2018 - European Test and Telemetry Conference, pp. 228–237, Nürnberg, Germany, June 2018.
  10. H.-K. Cho, C.-H. Kang, Y.-I. Jang, S.-H. Lee, and K.-Y. Kim, “Store separation analysis of a fighter aircraft’s external fuel tank,” International Journal of Aeronautical and Space Sciences, vol. 11, no. 4, pp. 345–350, 2010.
  11. J. W. Williams, R. F. Stancil, and A. E. Forsman, “Photogrammetrics: methods and applications for aviation test and evaluation,” in 33rd Annual International Symposium, SFTE, 2002.
  12. MIL-HDBK-1763, Aircraft/Stores Compatibility: Systems Engineering Data Requirements and Test Procedures, Department of Defense USA, 1998.
  13. E. S. Getson, “Telemetry solutions for weapon separation testing,” in 33rd Annual International Symposium, SFTE, 2002.
  14. W. Liu, X. Ma, Z. Jia et al., “An experimental system for release simulation of internal stores in a supersonic wind tunnel,” Chinese Journal of Aeronautics, vol. 30, no. 1, pp. 186–195, 2017.
  15. W. Liu, X. Ma, X. Li et al., “High-precision pose measurement method in wind tunnels based on laser-aided vision technology,” Chinese Journal of Aeronautics, vol. 28, no. 4, pp. 1121–1130, 2015.
  16. X. Ma, W. Liu, L. Chen, X. Li, Z. Jia, and Z. Shang, “Simulative technology for auxiliary fuel tank separation in a wind tunnel,” Chinese Journal of Aeronautics, vol. 29, no. 3, pp. 608–616, 2016.
  17. NATO (North Atlantic Treaty Organization), “Aircraft/Stores Compatibility, Integration and Separation Testing,” STO AGARDograph 300, Flight Test Technique Series – Volume 29, AG-300-V29, 2014, https://www.sto.nato.int/publications/STO%20Technical%20Reports/STO-AG-300-V29/$$AG-300-V29-ALL.pdf.
  18. Lietz, “Double Center Theodolite TM-20C,” May 2018, https://cn.sokkia.com/sites/default/files/sc_files/downloads/tm20c_0.pdf.
  19. Nikon, “Total Station DTM-322 - Instruction Manual,” April 2009, http://www.mcesurvey.com/files/Nikon_DTM-322_Total_Station_Manual.pdf.
  20. Mikrotron, “Mikrotron Cube7 - Datasheet,” May 2018, https://mikrotron.de/fileadmin/Data_Sheets/High-Speed_Recording_Cameras/mikrotron_motionblitz_eosens_cube7_datasheet.pdf.
  21. Kowa, “Lenses 6 mm Kowa – datasheet,” May 2018, http://www.rmaelectronics.com/content/Kowa-Lenses/LM6HC.pdf.
  22. S. A. Lima and J. L. Brito, “Estratégias para Retificação de Imagens Digitais,” in COBRAC 2006 - Congresso Brasileiro de Cadastro Técnico Multifinalitário, UFSC, Florianópolis, no. 90, pp. 1–14, 2006.
  23. P. R. Wolf, Elements of Photogrammetry, McGraw-Hill, Singapore, 1985.
  24. J. A. Farrell and M. Barth, “Dynamics of flight: stability and control,” Tech. Rep., Wiley, Toronto, 1996. View at Google Scholar
  25. E. M. Mikhail, J. S. Bethel, and J. Chris McGlone, Introduction to Modern Photogrammetry, Wiley, Hoboken, 2001.
  26. A. Gruen and T. S. Huang, Calibration and Orientation of Cameras in Computer Vision, Springer, Berlin, 2001.
  27. P. Swapna, N. Krouglicof, and R. Gosine, “The question of accuracy with geometric camera calibration,” in 2009 Canadian Conference on Electrical and Computer Engineering, pp. 541–546, St. John’s, Newfoundland, May 2009.
  28. T. A. Clarke and J. G. Fryer, “The development of camera calibration methods and models,” The Photogrammetric Record, vol. 16, no. 91, pp. 51–66, 1998. View at Publisher · View at Google Scholar · View at Scopus
  29. F. Remondino and C. Fraser, Digital Camera Calibration Methods - Considerations and Comparisons, ISPRS Commission V Symposium Image Engineering and Vision Metrology, Dresden, Germany, 2006.
  30. J. L. N. S. Brito and L. C. T. F. Coelho, Fotogrametria Digital, Ed UERJ, Rio de Janeiro, 2007.
  31. T. Schenk, Digital Photogrammetry Ohio, TerraScience, USA, 1999.
  32. D. C. Merchant, Analytical Photogrammetry: Theory and Practice Notes Revised from Earlier Edition Printed in 1973, The Ohio State University, 1979.
  33. J. Heikkila, “Geometric camera calibration using circular control points,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 10, pp. 1066–1077, 2000. View at Publisher · View at Google Scholar · View at Scopus
  34. A. E. Conrady, “Decentred lens-systems,” Monthly Notices of the Royal Astronomical Society, vol. 79, no. 5, pp. 384–390, 1919. View at Publisher · View at Google Scholar
  35. D. C. Brown, “Decentering distortion and the definitive calibration of metric cameras,” Tech. Rep., Annual Convention of the American Society of Photogrammetry, Washington DC, 1965. View at Google Scholar
  36. J. Heikkila, “Camera calibration toolbox for MATLAB,” May 2018. http://www.vision.caltech.edu/bouguetj/calib_doc.
  37. L. Roberto, “Acurácia do posicionamento e da orientação espacial de veículos aéreos a partir de imagens de câmeras de pequeno formato embarcadas,” in Dissetação de Mestrado, São José dos Campos: INPE, 2017. View at Google Scholar
  38. J. Hieronymus, “Comparison of methods for geometric camera calibration,” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXIX-B5, pp. 595–599, 2012. View at Publisher · View at Google Scholar
  39. L. Barazzetti, L. Mussio, F. Remondino, and M. Scaioni, “Targetless camera calibration,” ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XXXVIII-5/W16, pp. 335–342, 2011. View at Publisher · View at Google Scholar
  40. P. Debiasi, F. Hainosz, and E. A. Mitishita, “Calibração em serviço de câmara digital de baixo custo com o uso de pontos de apoio altimétrico,” Boletim de Ciências Geodésicas, vol. 18, no. 2, pp. 225–241, 2012. View at Publisher · View at Google Scholar
  41. Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330–1334, 2000. View at Publisher · View at Google Scholar · View at Scopus
  42. Z. Zhang and K. H. Wong, “A novel geometric approach for camera calibration,” in 2014 IEEE International Conference on Image Processing (ICIP), pp. 5806–5810, Paris, France, 2014.
  43. N. Borlin and P. Grussenmeyer, “Camera calibration using the damped bundle adjustment toolbox,” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. II-5, pp. 89–96, 2014. View at Publisher · View at Google Scholar · View at Scopus
  44. MATLAB, “Camera calibration toolbox tutorial,” in MATLAB R2014a Ver 8.3.0.532. [S.l.], The Mathworks Inc, 2014. View at Google Scholar
  45. OpenCV, “Camera calibration with OpenCV: OpenCV 2.4.13.0 documentation,” May 2018, http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html.
  46. W. S. Bazan, A. M. G. Tommaselli, M. Galo, and R. S. Ruy, “Calibração de um sistema dual de câmaras digitais convergentes,” in Simpósio Brasileiro de Geomática e Colóquio Brasileiro de Ciências Geodésicas, 2. e 5, pp. 726–734, Anais... Presidente Prudente: Universidade Estadual Paulista (UNESP), Presidente Prudente, Brasil, 2007. View at Google Scholar
  47. R. Ladstadter and M. Gruber, “Geometric aspects concerning the photogrammetric workflow of the digital aerial camera UltraCamX,” in Proceedings of the 21st ISPRS Congress Beijing 2008, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Beijing, China, 2008.
  48. Minipa, “Luxímetro Digital Minipa MLM-1011,” May 2018, https://www.instrusul.com.br/produto/luximetro-digital-minipa-mlm-1011-com-certificado-de-calibracao-18900#productDescription.
  49. B. K. P. Horn, “Closed-form solution of absolute orientation using unit quaternions,” Journal of the Optical Society of America A, vol. 4, no. 4, p. 629, 1987. View at Publisher · View at Google Scholar · View at Scopus
  50. C. C. Slama, C. Theurer, and S. W. Henrikson, Manual of Photogrammetry, American Society of Photogrammetry, Falls Church, Va, 1980.