Journal of Advanced Transportation | 2017 | Research Article | Open Access
Volume 2017 | Article ID 8324301 | https://doi.org/10.1155/2017/8324301

Yuchuan Du, Cong Zhao, Feng Li, Xuefeng Yang, "An Open Data Platform for Traffic Parameters Measurement via Multirotor Unmanned Aerial Vehicles Video", Journal of Advanced Transportation, vol. 2017, Article ID 8324301, 12 pages, 2017. https://doi.org/10.1155/2017/8324301

An Open Data Platform for Traffic Parameters Measurement via Multirotor Unmanned Aerial Vehicles Video

Academic Editor: William H. K. Lam
Received: 12 Aug 2016
Revised: 29 Nov 2016
Accepted: 28 Dec 2016
Published: 29 Jan 2017

Abstract

Multirotor unmanned aerial vehicle (UAV) video observation can obtain accurate information about the traffic flow of large areas over extended times. This paper constructs an open data test platform for updated traffic data accumulation and traffic simulation model verification by analyzing real-time aerial video. Common calibration boards were used to calibrate the internal camera parameters, and image distortion was corrected using a high-precision distortion model. To solve the external parameter calibration problem, an existing algorithm was improved by adding two sets of orthogonality equations, achieving higher accuracy with only four calibrated points. A simplified calibration algorithm is also proposed that computes the relationship between pixel length and true length when the camera optical axis is perpendicular to the road. Aerial video (160 min) of the Shanghai inner ring expressway was collected, and real-time traffic parameter values containing spatial, time, velocity, and acceleration data were obtained by analyzing and processing the aerial visual data. The results verify that the proposed platform provides a reasonable and objective approach to traffic simulation model verification and improvement. The proposed data platform also offers significant advantages over conventional methods that use historical and outdated data to run poorly calibrated traffic simulation models.

1. Introduction

Traffic congestion is frequently encountered on ground roads and urban expressways [1]. Many advanced traffic control strategies and complex traffic behaviors have been developed and studied using traffic simulation models to minimize congestion and to enlarge or modify traffic networks [2, 3]. For example, the cell transmission model (CTM) can determine optimal on-ramp metering rates, emergency dissipation response traffic density estimation, and congestion mode estimation for a freeway [4, 5]. Cellular automata (CA) are a flexible and powerful visualization tool used in urban growth simulations [6–8] and have many appealing features, including simulating bottom-up dynamics and capturing self-organizing processes [7, 8]. However, these optimization methods require reasonably accurate estimates of the relevant parameters [9]. Almost all traffic simulation models face the same difficulties: a lack of continuous and detailed real-time data, and a lack of frequent updates based on reliable, timely data, leading to inaccurate and improperly calibrated traffic simulation models with questionable results [10, 11]. Thus, mismatches and discrepancies arise between predicted traffic situations (simulation model output) and actual traffic patterns. Since each traffic network component (segment) has distinct characteristics, one cannot use the same set of calibrated data and parameter values across all network components [11, 12]. This is only possible if the collected data are frequently updated and reliable, and data sources are readily available.

Classical methods to accumulate traffic data and estimate traffic parameters depend upon induction loops and other on-site instruments [13]. However, they do not provide a comprehensive picture of two-dimensional (2D) traffic situations, with the primary drawback being their limitation for measuring important traffic parameters and accurately assessing traffic conditions [14]. Detection through unmanned aerial vehicle (UAV) video image processing is among the most attractive alternative new technologies, offering opportunities to perform substantially more complex tasks and provide more precise, accurate, and widespread traffic parameters than other sensors [15].

Recent advances in UAV technology and its use for traffic surveillance have allowed traffic planners to consider the "eye in the sky" approach to traffic monitoring, using detailed real-time data collection and processing to evaluate traffic patterns, determine origin-destination flows, and support emergency response [16–19]. Several research teams have focused on UAV applications in transportation engineering. The German Aerospace Center, the University of California at Berkeley, and Western Michigan University have proposed several methods to investigate the most effective ways of transmitting and analyzing UAV-acquired traffic data [20–22]. The University of South Florida, the University of Washington, and Linköping University have focused on the types of data and information that should be collected and extracted to design traffic simulation models and evaluate traffic networks [16, 23, 24]. UAVs are preferred over traditional technologies because of their mobility and lower operating costs relative to manned systems [25].

Most aerial cameras are imperfect and exhibit a variety of distortions and aberrations. To guarantee aerial data accuracy, improvements in airborne camera calibration and image distortion correction algorithms are essential. Abdel-Aziz and Karara proposed a method based on direct linear transformation into object space coordinates for close-range photogrammetry; however, their method does not consider nonlinear distortion [26]. Zhang used a precise lattice template to optimize internal camera parameters from the template's physical and image coordinates, resulting in a method that is simple, accurate, and flexible [27]. Heyden and Astrom showed that automatic calibration of variable parameters is possible under certain conditions [28, 29]. The advantage is that the algorithm may achieve high accuracy, provided the estimation model is good and correct convergence is achieved; however, since the algorithm is iterative, the procedure may settle on a bad solution unless a good initial guess is available. Espuny proposed an automatic linear calibration method for planar movement [30]. While automatic calibration is more flexible than traditional calibration methods, its accuracy is insufficient. This paper presents a simple and effective calibration method for intrinsic and external parameters and a high-precision distortion model for lens cameras.

Simulation models require calibration based on the unique features and traffic patterns of specific networks. The need to collect reliable and robust traffic state information has become increasingly urgent during the past decade. Multirotor UAVs differ from fixed-wing UAVs in their ability to hover at low or high altitudes and focus on data collection from a specific link or intersection. This paper proposes an open data test platform for updated traffic data accumulation and for traffic simulation model verification and improvement (in terms of variable parameter values) by analyzing aerial video collected in real time, shown diagrammatically in Figure 1. To offer a reliable way of collecting spatial-temporal data, camera parameter calibration methods and image distortion correction problems are explored under aerial shooting conditions. A simplified camera calibration algorithm is proposed based on the relationship between pixel length and true length when the camera optical axis is perpendicular to the road. The most appropriate shooting altitudes were also determined. Large quantities of aerial video of the Shanghai inner ring expressway were collected, and traffic simulation models were calibrated using the collected data and traffic parameters.

2. Purpose of Open Data Platform

The objective of our open data platform is to provide real-world data sets with corresponding data descriptions, to be used as a resource for the verification, validation, calibration, and development of existing traffic simulation models, such as car following, lane changing, gap acceptance, and queue discharge. Traffic simulation models have complicated data input requirements and many model parameters [31]. For example, building a microscopic simulation model for a given network requires two types of data: basic input data used for network coding of the simulation model, and observation data employed for calibrating the model parameters and the simulation model. The data sets of our platform contain the following two types.

(i) Basic Input Data. Basic input data include network geometry, transportation analysis zones, travel demand, and traffic detection systems.

(ii) Data for Model Development, Improvement, and Validation. After extensive research into the traffic data of potential use in microsimulation model development, improvement, and validation, the platform plans a data collection program with the following components. First, a freeway traffic data collection program includes both vehicle trajectory data and wide-area detector data for operational and tactical algorithm research. Second, an arterial traffic data collection program includes both vehicle trajectory data and wide-area detector data for operational and tactical algorithm research. Finally, a regional traffic data collection program includes both instrumented vehicle data and wide-area detector data for strategic algorithm research.

Despite the importance of UAV video traffic data for traffic flow theory research, these data have proved to be heavily affected by measurement errors in the vehicles' spatial coordinates. If not properly accounted for, these errors would make such data sets unusable for any study of traffic theory. The accuracy of the computed true vehicle location is mainly influenced by the error in the UAV position (the maximum error is estimated below 4-5 meters for distances to the object of 80 meters), the springs in the camera platform suspension, and the object recognition algorithm that automatically tracks the precise location (with an accuracy of one foot or less) of every vehicle on a subsecond basis. Existing efforts have relatively high vehicle detection and tracking failure rates, which require some manual transcription to ensure appropriate vehicle capture rates. The current work presents a multistep procedure for reconstructing vehicle trajectories, which aims at eliminating the outliers that give rise to unphysical accelerations, decelerations, and jerks; smoothing out the random disturbances in the data; and preserving the driving dynamics (vehicle stoppages, shifting gears during acceleration and deceleration) and the internal consistency of the trajectory, that is, the space actually traveled [32]. In spite of these concerns, ongoing trajectory data collection efforts can make UAV video traffic data usable for studies of traffic flow theory.
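A minimal sketch of such a reconstruction step, assuming a simple acceleration-threshold outlier rule and moving-average smoothing (the threshold and window below are hypothetical choices, not the parameters of the cited procedure [32]):

```python
import numpy as np

def reconstruct_trajectory(x, dt=0.1, a_max=8.0, window=9):
    """Clean a noisy 1-D vehicle trajectory (positions in metres).

    Illustrative multistep idea:
    1) flag samples whose implied acceleration is unphysical,
    2) re-interpolate over the flagged samples,
    3) smooth with a moving average (edge-padded).
    """
    x = np.asarray(x, dtype=float)
    v = np.gradient(x, dt)              # finite-difference speed
    a = np.gradient(v, dt)              # finite-difference acceleration
    bad = np.abs(a) > a_max             # outliers: |a| beyond a_max m/s^2
    idx = np.arange(len(x))
    if bad.any() and (~bad).sum() >= 2:
        x = np.interp(idx, idx[~bad], x[~bad])   # bridge the outliers
    kernel = np.ones(window) / window
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, kernel, mode="valid") # moving-average smoothing
```

In practice the smoothing step would need to be stop-preserving (e.g. leaving zero-speed segments untouched) to respect the driving dynamics mentioned above.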

3. Platform and Hardware

The DJI PHANTOM 2+ miniature UAV was employed, as shown in Figure 2, with the following characteristics.

(i) The UAV has four rotors (quadcopter), a 400 g payload, a 25 min maximum flight time, and a weight of 1000 g.

(ii) Precision flight and stable hovering are achieved using an integrated GPS autopilot system offering position holding, altitude lock, and stable hovering, which allows the controller to focus attention on video production.

(iii) Automated return to home: if the vehicle is disconnected from the controller during flight, the failsafe protection activates, transmits a message to the controller (if the signal is strong enough), and automatically navigates home.

(iv) The flight path may be programmed from an iPad using the 16-waypoint ground station system, enabling the controller to shoot with more precision.

The camera employed was the GoPro HERO3+ Black Edition, as shown in Figure 2, a commercially available "action" camera popular with athletes and extreme sports enthusiasts. It is capable of recording smooth, high-definition video at various resolutions from 720p to 4K at 15–120 frames per second (fps). Although the GoPro has no zoom capability, there are three field-of-view settings (wide, medium, and narrow), which allow the user to focus the camera on wider or smaller areas. The camera is Wi-Fi capable and can be operated using a Wi-Fi remote or via the free GoPro App on a smartphone or tablet.

4. Aerial Video Calibration

4.1. Pinhole Model

The basic principle of camera calibration is the pinhole imaging model. The pinhole model describes the relationship between three-dimensional (3D) coordinate points in the camera and the imaging plane projection. The camera iris, rather than the lenses used to gather light, is described as a point. However, the pinhole model does not consider geometric distortion caused by the lens and the limited aperture or blur caused by lack of focus, nor does it consider discrete pixel coordinates in the real camera. Thus, the pinhole model constitutes a first-order approximate transformation for mapping 3D coordinates to 2D or planar coordinates, with accuracy that depends upon camera quality. Accuracy decreases with increased lens distortion from center to edge.

Homogeneous coordinates using the pinhole model of camera projection are

$$s\,\tilde{m} = K\,[R \mid t]\,\tilde{M},$$

where $\tilde{m} = (u, v, 1)^{T}$ and $\tilde{M} = (X, Y, Z, 1)^{T}$ are the image and world coordinates, respectively, in homogeneous form; $s$ is a scaling factor; $[R \mid t]$ is the external parameter matrix, converting between world and camera coordinates, where $R$ is the rotation matrix and $t$ locates the world coordinate origin in camera-centric coordinates; and $K$ is the internal parameter matrix:

$$K = \begin{pmatrix} f_x & \gamma & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{pmatrix},$$

which includes five parameters relating to the lens focal length, the image sensor system, and the location of the principal point.
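As a minimal numerical sketch of this projection, with a hypothetical intrinsic matrix and pose (not the calibrated values from this study):

```python
import numpy as np

# Illustrative intrinsic matrix K (focal lengths and principal point in
# pixels; hypothetical values, not the calibration results of Table 1).
K = np.array([[1200.0,    0.0, 1280.0],
              [   0.0, 1200.0,  960.0],
              [   0.0,    0.0,    1.0]])

# External parameters [R | t]: identity rotation and a 100 m stand-off
# along the optical axis, chosen purely for illustration.
R = np.eye(3)
t = np.array([[0.0], [0.0], [100.0]])
Rt = np.hstack([R, t])                      # 3x4 external matrix

def project(point_world):
    """Map a 3-D world point to pixel coordinates via s*m = K [R|t] M."""
    M = np.append(point_world, 1.0)         # homogeneous world point
    m = K @ Rt @ M                          # un-normalised image point
    return m[:2] / m[2]                     # divide out the scale factor s

u, v = project(np.array([0.0, 0.0, 0.0]))   # a point on the optical axis
```

A point on the optical axis projects to the principal point (u0, v0), which is a quick sanity check on any implementation of the model.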

4.2. Intrinsic Parameter Calibration and Distortion Correction

Intrinsic parameter calibration adopts a camera calibration algorithm based on HALCON software [33]. The calibration plate is a 7 × 7 lattice with 12.5 mm dot pitch. HALCON recommends that the camera should shoot at least eight images in different positions and poses and that the circle diameter in the final images should be >10 pixels. To minimize calibration error, 50 images were taken at various positions and poses. Figure 3 shows example images, and Table 1 shows the final calibration results.


Table 1: Intrinsic calibration results.

Width of pixel: 8.31            Center-point coordinate (x): 1253.62
Height of pixel: 8.30           Center-point coordinate (y): 976.77
Focal length: 12.18             Image width (px): 2560
Distortion factor: −2112.24     Image height (px): 1920

Nonlinear distortion during camera imaging can arise from several characteristics, including CCD manufacturing error, lens surface errors, axial spacing between each lens, and centralization errors. The Lenz distortion correction model was adopted to meet precision requirements [34]:

$$\tilde{u} = \frac{2u}{1 + \sqrt{1 - 4\kappa(u^{2} + v^{2})}}, \qquad \tilde{v} = \frac{2v}{1 + \sqrt{1 - 4\kappa(u^{2} + v^{2})}},$$

where $\kappa$ is the magnitude of radial distortion, negative values indicating barrel distortion and positive values indicating pincushion distortion; $(u, v)$ is the original image coordinate point; and $(\tilde{u}, \tilde{v})$ is the corrected image coordinate point. Knowing the distortion parameter after calibration is crucial to the correction process, as demonstrated in Figure 4.
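A sketch of the correction step, assuming the division-model form of the Lenz correction shown above (coordinates are taken relative to the distortion centre; the implementation is illustrative):

```python
import numpy as np

def undistort_division(u, v, kappa):
    """Correct radial distortion with the division model attributed to
    Lenz; kappa < 0 gives barrel distortion, kappa > 0 pincushion.
    (u, v) are distorted image coordinates relative to the distortion
    centre; returns the corrected coordinates."""
    r2 = u * u + v * v
    s = 2.0 / (1.0 + np.sqrt(1.0 - 4.0 * kappa * r2))
    return s * u, s * v
```

A useful property of this model is that its inverse (applying distortion) has the closed form u_d = u / (1 + kappa * r^2), so a round trip should reproduce the original point exactly.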

4.3. External Parameter Calibration
4.3.1. External Parameter Calibration Algorithm

Traffic scenes tend to involve planar calibration, such as a road or channel plane. It is assumed that $Z = 0$ for the world coordinates. Therefore, the pinhole model can be simplified to

$$s\,\tilde{m} = K\,[\,r_1 \ r_2 \ t\,](X, Y, 1)^{T} = M\,(X, Y, 1)^{T},$$

where $\tilde{m}$ and $(X, Y, 1)^{T}$ are the image coordinates after distortion correction and the world coordinates, respectively. The internal parameter matrix, $K$, was calculated above. Therefore, solving the 3 × 3 matrix, $M$, provides the transformation between world and pixel coordinates.

There are nine unknown parameters in the matrix $M$: the homogeneous scale provides one linear equation, and four sets of point correspondences provide eight more, so nine equations can be obtained and solved exactly.
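The four-point linear solution can be sketched as follows. For compactness this sketch fixes the homogeneous scale by setting the bottom-right entry of the matrix to 1, leaving eight unknowns in an exact 8 × 8 system (one common normalization; the paper's own formulation keeps the scale as a ninth unknown):

```python
import numpy as np

def homography_from_4_points(world_xy, image_uv):
    """Solve the 3x3 planar projection matrix from four point pairs.
    Each correspondence (X, Y) -> (u, v) contributes two linear
    equations, so four points determine the eight free entries."""
    A, b = [], []
    for (X, Y), (u, v) in zip(world_xy, image_uv):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y]); b.append(u)
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)
```

With exact correspondences the recovered matrix reproduces the mapping; with noisy image points one would instead stack more correspondences and solve in the least-squares sense, as the next subsection discusses.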

Because $R$ is orthogonal, the constraints are nonlinear equations that can be solved using the iterative method. Selecting reasonable initial values can greatly increase iterative speed. The initial value is the solution of the linear equations obtained by substituting the four control points into the projection equation, which is relatively close to the true solution.

Let $m_1$ and $m_2$ signify the first two columns of $M = K[\,r_1 \ r_2 \ t\,]$. Since $r_1^{T} r_2 = 0$ and $\|r_1\| = \|r_2\|$, the constraints become

$$m_1^{T} K^{-T} K^{-1} m_2 = 0, \qquad m_1^{T} K^{-T} K^{-1} m_1 = m_2^{T} K^{-T} K^{-1} m_2.$$

Let $F(x) = 0$ represent the overdetermined system obtained after adding constraints (7) to the linear projection equations. Then

$$F(x) = \big(f_1(x), f_2(x), \ldots, f_n(x)\big)^{T} = 0,$$

with more equations than the nine unknowns.

The least squares method was used to convert (8) into a determinate system of nonlinear equations:

$$\min_{x} \|F(x)\|^{2} \;\Longleftrightarrow\; J(x)^{T} F(x) = 0,$$

where $J(x)$ is the Jacobian matrix of $F$.

We may then apply a quadratic approximation via the Taylor expansion, compute the Jacobian matrix of each equation, and solve the resulting system with Newton's iteration method.
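The least-squares-plus-Newton step can be illustrated with a generic Gauss-Newton iteration on a toy overdetermined system (the three toy equations below are hypothetical, standing in for the projection equations plus the two orthogonality constraints):

```python
import numpy as np

def gauss_newton(F, J, x0, iters=50, tol=1e-12):
    """Least-squares solution of an overdetermined system F(x) = 0:
    minimise ||F(x)||^2 by iterating on the normal equations
    J^T J dx = -J^T F, a standard stand-in for the Newton iteration
    described in the text."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        f, j = F(x), J(x)
        dx = np.linalg.solve(j.T @ j, -j.T @ f)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy overdetermined system: 2 unknowns, 3 equations, consistent at (1, 1):
#   x0^2 + x1 = 2,   x0 + x1^2 = 2,   x0 * x1 = 1
F = lambda x: np.array([x[0]**2 + x[1] - 2,
                        x[0] + x[1]**2 - 2,
                        x[0] * x[1] - 1])
J = lambda x: np.array([[2 * x[0], 1.0],
                        [1.0, 2 * x[1]],
                        [x[1], x[0]]])
sol = gauss_newton(F, J, np.array([2.0, 0.5]))
```

As the text notes, the quality of the initial guess matters: starting far from the linear-solution estimate can send the iteration to a spurious stationary point of the normal equations.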

4.3.2. Experimental Verification and Analysis

To validate the required number of calibration points for the proposed method and compare accuracy without adding constraints, the proximal and distal points of a known scene were selected as verification points (these two sets of points were not included in the solution of the parameter matrix). It is assumed that the world coordinates from field measurements and the pixel coordinates after distortion correction are the true values, and so the difference in calculated position relative to the true value was used to judge accuracy, as shown in Figure 5.

For a small number of calibration points, distal-point differences are larger than proximal-point differences. However, the differences stabilize at seven or more calibration points without the added constraints, and at only four points when the constraints are added. The difference does not approach zero as the number of measurement points increases. Given that multiple measurements reduce random error, this nonzero difference is systematic and arises mainly from errors in the internal parameter matrix and the distortion correction coefficient.

4.3.3. Simplified Aerial Shooting Calibration Algorithm

Measuring traffic parameters via UAV usually adopts a vertically downward (nadir) viewing angle. When the camera imaging surface and the road plane are parallel to one another, camera calibration can be simplified: using the constant ratio between image and world coordinates, length and speed parameters can be obtained quickly.

Transforming between the world coordinate system and the camera coordinate system is undertaken through the rotation matrix, $R$, and the translation vector, $T$.

The translation vector, $T$, is a 3D column vector, whereas the rotation matrix, $R$, is the product of rotations through angles $\alpha$, $\beta$, and $\gamma$ about the $x$, $y$, and $z$ axes:

$$R = R_x(\alpha)\,R_y(\beta)\,R_z(\gamma),$$

so that

$$R_x(\alpha) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix}, \quad R_y(\beta) = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix}, \quad R_z(\gamma) = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$

When the camera imaging surface and road plane are parallel to one another, $\alpha = 0$ and $\beta = 0$, which simplifies $R$ to a pure rotation about the $z$ axis, $R = R_z(\gamma)$.

Taking the first two columns of $R$ and the vector $T$ to form the 3 × 3 matrix $M$ from the previous section allows adoption of the simplified algorithm whenever $\alpha$ and $\beta$ are equal or close to zero.
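Under this parallel-plane (nadir) assumption, the pixel-to-metre relationship collapses to a single ground-scale ratio. A sketch, using illustrative GoPro-like numbers (the 2.77 mm focal length is an assumption for illustration, not a value reported in this paper; the 5.38 mm sensor width is from the text):

```python
def ground_scale(altitude_m, focal_mm, sensor_width_mm, image_width_px):
    """Metres on the road per image pixel for a nadir view: similar
    triangles give scale = altitude * pixel_pitch / focal_length."""
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return altitude_m * pixel_pitch_mm / focal_mm

# Illustrative numbers approximating the 1/2.5" sensor in the text
scale = ground_scale(altitude_m=100.0, focal_mm=2.77,
                     sensor_width_mm=5.38, image_width_px=1920)
crosswalk_px = 4.0 / scale   # pixels spanned by a 4 m crosswalk line
```

The scale is proportional to altitude, which is why the error analysis below must separate calibration error from the altitude-dependent perspective error caused by camera vibration.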

4.4. Error Analysis of Simplified Algorithm

To verify the accuracy of the simplified method under nadir (vertically downward) conditions, the same calibration object within a road environment was photographed from different heights. The calibration object was a crosswalk line 4 m in length. The UAV hovered and shot for 1 min at every 10 m of elevation from 0 to 100 m. Example images are shown in Figure 6.

The camera mode was 16:9 wide 1080p, with the intrinsic parameters calibrated as described above. The light-sensitive component of the GoPro HERO3+ is 1/2.5 inches, corresponding to 5.38 mm × 4.39 mm. Fifteen sets of independent experiments were conducted for each height, with outcomes as shown in Figure 7.

The Kolmogorov-Smirnov method was used to verify whether the data at each flight elevation follow a normal distribution, with the significance level fixed in advance. The results indicate that the data are consistent with a normal distribution. A two-tailed t-test was then applied at each flight elevation to test whether the results equal the true value, with the hypotheses

H0: the calculation results of the simplified algorithm equal the true value (4 m);
H1: the calculation results of the simplified algorithm differ from the true value (4 m).
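The two-tailed one-sample t-test can be sketched as follows; the 15 sample lengths are hypothetical, standing in for one elevation's measurements:

```python
import numpy as np

def t_test_one_sample(sample, mu0):
    """Two-tailed one-sample t statistic against the true value mu0,
    mirroring the H0: mean == 4 m test in the text (df = n - 1)."""
    x = np.asarray(sample, float)
    n = x.size
    t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# 15 hypothetical length measurements (m) at one flight elevation
sample = [4.02, 4.05, 3.98, 4.01, 4.07, 3.96, 4.03, 4.00,
          4.04, 3.99, 4.06, 4.02, 3.97, 4.01, 4.05]
t, df = t_test_one_sample(sample, 4.0)
```

The statistic is then compared with the critical value of the t distribution with df = 14, which matches the degrees of freedom reported in Table 2.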

From Figure 7 and Table 2, the error can be controlled within 0.3 m at elevations of 80–100 m, where the simplified-algorithm results come closest to the true value (4 m). Because camera vibration means the optical axis is not exactly perpendicular to the road surface, the length ratio of lower-elevation images is larger. Therefore, the appropriate flight elevation for the best accuracy of the simplified algorithm is 100 m.


Table 2: One-sample t-test results (test value = 4.0 m; 90% confidence interval of the difference).

Flight elevation (m)    t        df   Sig. (two-tailed)   Mean difference   CI lower   CI upper
10                       7.415   14   <0.001              0.820             0.625      1.015
20                      10.242   14   <0.001              0.462             0.382      0.541
30                       9.611   14   <0.001              0.374             0.305      0.442
40                      19.976   14   <0.001              0.302             0.275      0.329
50                      21.960   14   <0.001              0.281             0.259      0.304
60                      11.252   14   <0.001              0.225             0.189      0.260
70                      12.364   14   <0.001              0.195             0.167      0.223
80                       3.093   14   0.008               0.063             0.027      0.099
90                       3.126   14   0.007               0.058             0.0252     0.090
100                      2.241   14   0.042               0.025             0.005      0.044

5. Practical Application

The proposed open data platform was applied to measure traffic parameters on the Shanghai inner ring expressway, at a section with an on-ramp located near Renaissance Park (Figure 8). This section of the expressway suffers extreme traffic congestion during peak hours, forming a major bottleneck that frequently leads to traffic backups and accidents. Many traffic engineering researchers are interested in studying the unique features and driving behaviors of this specific network but lack accurate, large-scale data sets; one reason is the absence of conveniently located tall buildings from which traffic parameters could be collected using traditional digital cameras.

Approximately 160 minutes of aerial video was collected during peak traffic hours from the UAV flying at 100 m. The extracted images covered 180 m of road. Vehicle space-time trajectories were extracted after camera calibration using the methods described in the previous section (Figure 9(a)). Traffic volume, density, and speed were also gathered, together with the traffic parameters used by the CA model. The lane (green background, Figure 9) is zoned into 28 cells (6 m/cell). Acceleration distribution characteristics were subsequently calculated (Figure 9(b)). The proposed open data platform can then be used to verify, calibrate, and develop macroscopic, mesoscopic, and microscopic traffic models, which in turn can simulate and evaluate traffic strategies for this specific network.
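Macroscopic volume, density, and speed can be computed directly from the extracted trajectories; a sketch using Edie's generalized definitions over a space-time region (a standard approach, not necessarily the exact formulas used by the platform):

```python
import numpy as np

def edie_flow_density_speed(trajs, dt, x0, x1, t0, t1):
    """Edie's generalised definitions over the space-time region
    [x0, x1] x [t0, t1]: flow q = total distance travelled / (L*T),
    density k = total time spent / (L*T), space-mean speed v = q / k.
    `trajs` is a list of (times, positions) arrays sampled at dt,
    with positions increasing along the road."""
    L, T = x1 - x0, t1 - t0
    dist = time_spent = 0.0
    for t, x in trajs:
        inside = (x >= x0) & (x < x1) & (t >= t0) & (t < t1)
        time_spent += inside.sum() * dt        # seconds spent in region
        xi = x[inside]
        if xi.size >= 2:
            dist += xi[-1] - xi[0]             # metres covered in region
    q = dist / (L * T)        # veh/s per metre-of-region -> flow
    k = time_spent / (L * T)  # veh/m -> density
    v = q / k if k > 0 else float("nan")
    return q, k, v
```

For the 180 m study section, the region would be one cell or the whole segment, and the same counts aggregate naturally to the 6 m cells of the CA model.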

To verify the quality of the recorded trajectories, another empirical study was conducted. A car equipped with a high-precision GPS system (accuracy better than 0.3 m) was used to collect an individual trajectory alongside the UAV video, under a peak-hour condition (8:00–8:15) and a free-flow condition (10:00–10:15). The recording frequency of both modes was 10 Hz.

Vehicle spatial-temporal trajectory, speed, and acceleration can be extracted from UAV video based on the proposed camera calibration method and an existing moving-target tracking algorithm. The detailed procedures are as follows.

(i) First, both intrinsic and external parameters of the camera are calibrated by the proposed method.

(ii) Then, the noise caused by the unavoidable motion of the aircraft in all six degrees of freedom during the hovering phase is eliminated (hovering accuracy of the aircraft: vertical 0.8 m; horizontal 2.5 m; angle 0.03°). Stationary and moving features in the frames are detected, matched, and categorized by the Scale-Invariant Feature Transform (SIFT) [35]. The stationary features are then used to calibrate model parameters and complete image registration [36].

(iii) Finally, the Camshift algorithm is used to track each moving target, and a series of position coordinates is extracted [37].

The resulting speed and acceleration profiles are shown in Figure 10. The GPS and UAV curves are in good agreement under both the peak-hour and free-flow conditions.
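A sketch of how the 10 Hz position series can be differentiated and the two profiles compared (finite differences and a simple relative-error statistic; these details are assumptions for illustration, not the paper's exact processing):

```python
import numpy as np

def speed_profile(positions, dt=0.1):
    """Speed by central finite differences from 10 Hz positions (m)."""
    return np.gradient(np.asarray(positions, float), dt)

def relative_error_stats(v_uav, v_gps):
    """Mean and sample standard deviation of the relative speed
    error (%), as used to compare the UAV and GPS profiles."""
    v_uav, v_gps = np.asarray(v_uav), np.asarray(v_gps)
    mask = np.abs(v_gps) > 1.0        # avoid division near standstill
    rel = 100.0 * (v_uav[mask] - v_gps[mask]) / v_gps[mask]
    return rel.mean(), rel.std(ddof=1)
```

The standstill mask matters in the peak-hour data: relative error is ill-defined when the GPS speed is near zero, which is one source of the large outliers discussed below.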

The resulting frequency plot of relative errors under the peak-hour condition is shown in Figure 11. The mean and standard deviation of the relative error are 1.7049% and 3.7619%, respectively, and the relative errors are far from normally distributed. This is confirmed by the normal probability plot in Figure 11(b), which shows that the distribution of relative errors deviates from normality, especially below 25%, where the biggest outliers lie. The mean of the relative errors is significantly different from zero (as confirmed by a t-test at the 5% level of significance), which is a symptom of a systematic error component. This may be caused by errors in the GPS system and in the chain of vehicle-extraction algorithms. Nevertheless, the collected UAV video spatial-temporal traffic data can support research into the microscopic details of traffic flow theory within admissible error.

6. Conclusion

An open data platform for traffic parameter collection and for traffic model verification and development was proposed, incorporating multirotor UAV video, which can provide large-scale, comprehensive pictures of 2D traffic situations. UAV video image processing is among the most attractive alternative new technologies, offering opportunities to perform substantially more complex tasks and to provide more precise, accurate, and extensive traffic parameters than other sensors. The UAV platform used for this study was a DJI PHANTOM 2+ with a GoPro HERO3+ Black Edition camera.

Camera parameter calibration and image distortion correction algorithms were developed for aerial shooting conditions to offer a reliable and accurate method of collecting spatial-temporal data for traffic model calibration. For internal camera calibration, the Lenz model was employed to correct aerial video distortion. For external calibration, two sets of constraint equations were added based on the orthogonality of the rotation matrix. Compared to the traditional Zhang model, the proposed method requires only four fixed points to calculate the matrices, which reduces the large workload associated with aerial calibration. A simplified algorithm was proposed to transform between the world and camera coordinate systems when the camera imaging surface and road plane are parallel. Experimental results verified that the simplified algorithm is highly accurate, and an appropriate elevation for video collection is 100 m.

Approximately 160 minutes of aerial video was collected during peak traffic hours on the Shanghai inner ring expressway using the proposed open data platform. Vehicle space-time trajectories were extracted after camera calibration, together with the macroscopic traffic flow parameters (volume, density, and speed) and the traffic parameters associated with the CA model. An empirical comparison against a high-precision GPS system under peak-hour and free-flow conditions demonstrates that the data set is sufficient and effective for research into the microscopic details of traffic flow theory.

Disclosure

The authors take sole responsibility for all views and opinions expressed in this paper.

Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was based on the results of a research project, which was partly supported by a research grant from the Ministry of Science and Technology of the People’s Republic of China (2014BAG03B05) and partly supported by the Program for Changjiang Scholars and Innovative Research Team in University.

References

  1. M. Sarvi, M. Kuwahara, and A. Ceder, “Observing freeway ramp merging phenomena in congested traffic,” Journal of Advanced Transportation, vol. 41, no. 2, pp. 145–170, 2007.
  2. Y. Du, C. Zhao, X. Zhang, and L. Sun, “Microscopic simulation evaluation method on access traffic operation,” Simulation Modelling Practice and Theory, vol. 53, pp. 139–148, 2015.
  3. J. Tian, M. Treiber, S. Ma, B. Jia, and W. Zhang, “Microscopic driving theory with oscillatory congested states: model and empirical verification,” Transportation Research Part B: Methodological, vol. 71, pp. 138–157, 2015.
  4. G. Gomes and R. Horowitz, “Optimal freeway ramp metering using the asymmetric cell transmission model,” Transportation Research Part C: Emerging Technologies, vol. 14, no. 4, pp. 244–262, 2006.
  5. Y. Gao, Y. Liu, H. Hu, and Y. Ge, “Modeling traffic operation at signalized intersections without explicit left-turn yielding rules with an enhanced cell transmission model,” Journal of Advanced Transportation, vol. 50, no. 7, pp. 1470–1488, 2016.
  6. S. Wolfram, “Universality and complexity in cellular automata,” Physica D: Nonlinear Phenomena, vol. 10, no. 1-2, pp. 1–35, 1984.
  7. B. Chopard and M. Droz, Cellular Automata, Cambridge University Press, Cambridge, UK, 1998.
  8. C. Mallikarjuna and K. R. Rao, “Cellular Automata model for heterogeneous traffic,” Journal of Advanced Transportation, vol. 43, no. 3, pp. 321–345, 2009.
  9. K. C. Clarke, S. Hoppen, and L. Gaydos, “A self-modifying cellular automaton model of historical urbanization in the San Francisco Bay area,” Environment and Planning B: Planning and Design, vol. 24, no. 2, pp. 247–261, 1997.
  10. A. Puri, K. Valavanis, and M. Kontitsis, “Generating traffic statistical profiles using unmanned helicopter-based video data,” in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '07), pp. 870–876, Roma, Italy, April 2007.
  11. J. Hourdakis, P. G. Michalopoulos, and J. Kottommannil, “Practical procedure for calibrating microscopic traffic simulation models,” Transportation Research Record, no. 1852, pp. 130–139, 2003.
  12. R. Balakrishna, C. Antoniou, M. Ben-Akiva, H. N. Koutsopoulos, and Y. Wen, “Calibration of microscopic traffic simulation models: methods and application,” Transportation Research Record: Journal of the Transportation Research Board, vol. 1999, 2015.
  13. M. Wang, A. Ailamaki, and C. Faloutsos, “Capturing the spatio-temporal behavior of real traffic data,” Performance Evaluation, vol. 49, no. 1-4, pp. 147–163, 2002.
  14. P. Reinartz, M. Lachaise, E. Schmeer, T. Krauss, and H. Runge, “Traffic monitoring with serial images from airborne cameras,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 61, no. 3-4, pp. 149–158, 2006.
  15. A. Puri, Statistical Profile Generation of Real-Time UAV-Based Traffic Data, 2008.
  16. A. Puri, K. P. Valavanis, and M. Kontitsis, “Statistical profile generation for traffic monitoring using real-time UAV based video data,” in Proceedings of the Mediterranean Conference on Control and Automation (MED '07), IEEE, Athens, Greece, July 2007.
  17. S. Srinivasan, H. Latchman, J. Shea, T. Wong, and J. McNair, “Airborne traffic surveillance systems: video surveillance of highway traffic,” in Proceedings of the ACM 2nd International Workshop on Video Surveillance & Sensor Networks (VSSN '04), pp. 131–135, ACM, New York, NY, USA, October 2004.
  18. L. Mussone, M. Matteucci, M. Bassani, and D. Rizzi, “An innovative method for the analysis of vehicle movements in roundabouts based on image processing,” Journal of Advanced Transportation, vol. 47, no. 6, pp. 581–594, 2013.
  19. L. Sun, Y. Pan, and W. Gu, “Data mining using regularized adaptive B-splines regression with penalization for multi-regime traffic stream models,” Journal of Advanced Transportation, vol. 48, no. 7, pp. 876–890, 2014.
  20. K.-H. Bethke, S. Baumgartner, and M. Gabele, “Airborne road traffic monitoring with radar,” in Proceedings of the 14th World Congress on Intelligent Transport Systems (ITS '07), pp. 1895–1900, Beijing, China, October 2007.
  21. Z. Kim, “Realtime road detection by learning from one example,” in Proceedings of the 7th IEEE Workshop on Applications of Computer Vision (WACV '05), pp. 455–460, January 2005.
  22. Y. M. Chen, L. Dong, and J.-S. Oh, “Real-time video relay for UAV traffic surveillance systems through available communication networks,” in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '07), pp. 2610–2614, Hong Kong, China, March 2007.
  23. E. D. McCormack and T. Trepanier, The Use of Small Unmanned Aircraft by the Washington State Department of Transportation, Washington State Department of Transportation, 2008.
  24. F. Heintz, P. Rudol, and P. Doherty, “From images to traffic behavior—a UAV tracking and monitoring application,” in Proceedings of the 10th International Conference on Information Fusion (FUSION '07), pp. 1–8, IEEE, Quebec City, Canada, July 2007.
  25. R. Austin, Unmanned Aircraft Systems: UAVS Design, Development and Deployment, John Wiley & Sons, New York, NY, USA, 2011.
  26. Y. I. Abdel-Aziz and H. M. Karara, “Direct linear transformation into object space coordinates in close-range photogrammetry,” in Proceedings of the Symposium on Close-Range Photogrammetry, pp. 1–18, Urbana-Champaign, Ill, USA, January 1971.
  27. Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” in Proceedings of the 7th IEEE International Conference on Computer Vision (ICCV '99), pp. 666–673, Kerkyra, Greece, September 1999. View at: Google Scholar
  28. A. Heyden, Geometry and Algebra of Multiple Projective Transformations, Lund Institute of Technology, Lund, Sweden, 1995. View at: MathSciNet
  29. A. Heyden and K. Astrom, “Euclidean reconstruction from image sequences with varying and unknown focal length and principal point,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 438–443, IEEE, San Juan, Puerto Rico, USA, June 1997. View at: Google Scholar
  30. F. Espuny, “A new linear method for camera self-calibration with planar motion,” Journal of Mathematical Imaging and Vision, vol. 27, no. 1, pp. 81–88, 2007. View at: Publisher Site | Google Scholar | MathSciNet
  31. V. Alexiadis, J. Colyar, J. Halkias, R. Hranac, and G. McHale, “The next generation simulation program,” ITE Journal (Institute of Transportation Engineers), vol. 74, no. 8, pp. 22–26, 2004. View at: Google Scholar
  32. M. Montanino and V. Punzo, “Trajectory data reconstruction and simulation-based validation against macroscopic traffic patterns,” Transportation Research Part B: Methodological, vol. 80, pp. 82–106, 2015. View at: Publisher Site | Google Scholar
  33. W. Eckstein and C. Steger, “The Halcon vision system: an example for flexible software architecture,” in Proceedings of the 3rd Japanese Conference on Practical Applications of Real-Time Image Processing, pp. 18–23, Technical Committe of Image Processing Applications, Japanese Society for Precision Engineering, 1999. View at: Google Scholar
  34. J. Weng, P. Coher, and M. Herniou, “Camera calibration with distortion models and accuracy evaluation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 10, pp. 965–980, 1992. View at: Publisher Site | Google Scholar
  35. T. Lindeberg, “Scale invariant feature transform,” Scholarpedia, vol. 7, no. 5, Article ID 10491, 2012. View at: Publisher Site | Google Scholar
  36. Z. Xin, Y. Chang, L. Li et al., “Algorithm of vehicle speed detection in unmanned aerial vehicle videos,” in Proceedings of the Transportation Research Board 93rd Annual Meeting, (14-2616), Washington, DC, USA, January 2014. View at: Google Scholar
  37. J. G. Allen, R. Y. D. Xu, and J. S. Jin, “Object tracking using camshift algorithm and multiple quantized feature spaces,” in Proceedings of the Pan-Sydney Area Workshop on Visual Information Processing, pp. 3–7, Australian Computer Society, 2004. View at: Google Scholar

Copyright © 2017 Yuchuan Du et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

