Mobile Information Systems
Volume 2018, Article ID 8501898, 14 pages
Research Article

A New Vehicle Localization Scheme Based on Combined Optical Camera Communication and Photogrammetry

Department of Electronics Engineering, Kookmin University, Seoul, Republic of Korea

Correspondence should be addressed to Yeong Min Jang;

Received 16 December 2017; Revised 15 February 2018; Accepted 11 March 2018; Published 8 April 2018

Academic Editor: Jeongyeup Paek

Copyright © 2018 Md. Tanvir Hossan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


The demand for autonomous vehicles is increasing gradually owing to their enormous potential benefits. However, several challenges, such as vehicle localization, are involved in the development of autonomous vehicles. A simple and secure algorithm for vehicle positioning is proposed herein without massively modifying the existing transportation infrastructure. For vehicle localization, vehicles on the road are classified into two categories: host vehicles (HVs), which estimate the positions of other vehicles, and forwarding vehicles (FVs), which move in front of the HVs. The FV transmits modulated data from its tail (or back) lights, and the camera of the HV receives that signal using optical camera communication (OCC). In addition, streetlight (SL) data are used to ensure the position accuracy of the HV. Determining the HV position first minimizes the effect of relative position variation between the HV and FV. Using photogrammetry, the distance between the FV (or SL) and the camera of the HV is calculated by measuring the image area occupied on the image sensor. By comparing the change in distance between the HV and SLs with the change in distance between the HV and FV, the positions of the FVs are determined. The performance of the proposed technique is analyzed, and the results indicate a significant performance improvement. An experimental distance measurement validated the feasibility of the proposed scheme.

1. Introduction

Localization refers to the process of identifying the location (x and y coordinates in two-dimensional (2D) space; x, y, and z coordinates in three-dimensional (3D) space) of an object at a certain point in space at a specific time. Several studies are contributing to the development of accurate localization schemes owing to the increased demand for Internet of Things (IoT) applications. The necessity of a localization scheme is integrated within the requirements of the IoT, which relies on an enormous number of physical objects (e.g., sensor nodes and sensor networks) connected via the Internet [1]. These objects can be interconnected either via wired or wireless mediums. A localization scheme is an important concern for connecting sensor nodes in remote locations: a node cannot access or wirelessly communicate with other nodes without accurately positioning itself. The characteristics of localization schemes vary with the features of indoor and outdoor environments [2].

It is well known that localizing sensor nodes indoors can be a crucial obligation for modern business and commerce. However, issues related to outdoor localization, particularly vehicle localization, are prioritized over indoor localization. Recently, road traffic safety [3] has become an important concern owing to the increasing number of fatal road accidents. World Health Organization statistics [4] show that traffic-related accidents worldwide result in 1.3 million deaths, particularly among people between 15 and 29 years of age, and the number of nonlethal injuries is 15–40 times greater (between 20 and 50 million). Thus, traffic fatalities rank among the top 10 causes of death, comparable to suicide, HIV/AIDS, homicide, and various diseases. The most common cause of traffic fatalities (around 60%) is high vehicle speed (above 80 km/h) [5]. Autonomous vehicles can help minimize traffic deaths, and the demand for them has been rising dramatically to avoid accidents [6]. Furthermore, outdoor localization is of prime importance in the transportation domain, particularly for autonomous vehicles, which require localizing other vehicles from the host vehicle (HV) in road environments such as highways. For autonomous vehicles, the features of localization are classified as active and passive. Active features include setting a region of interest (ROI), measuring the possibility of communicating with other vehicles, and maintaining a safe distance from other vehicles to avoid unwanted collisions by measuring spatial and temporal scenarios [7]. Passive features include obtaining localization information from individual vehicles, which can then be accumulated by a traffic control center and utilized effectively to mitigate traffic congestion.

1.1. Existing Solutions, Limitations, and Current Trends in Vehicle Localization

The global positioning system (GPS) is considered the most prominent solution for outdoor localization. GPS provides a line-of-sight vehicle localization solution using sensor information [8–10] and data from satellites orbiting at an altitude of approximately 20,000 km. GPS uses the radio frequency (RF) band for positioning the HV on the road. However, the HV cannot measure its own distance from another vehicle, such as the forwarding vehicle (FV), via GPS; it offers only the current location of the HV. Moreover, this localization scheme is fraught with several challenges, such as GPS signals being blocked by obstacles such as buildings, subways, tunnels, and trees. Localization using GPS can generate a localization error of up to 1 m within 10 s [11]. A wireless network standard for vehicle states, called IEEE 802.11p [12, 13], is available; this is referred to as wireless access in vehicular environments (WAVE) [14]. This standard is used to maintain a communication network among vehicles within vehicular ad hoc networks (VANETs) [15] and to support intelligent transport system applications. RF signals in VANET systems are used for communication and vehicle localization [16]. Owing to various environmental effects and the multipath nature of the network, non-Gaussian noise is added to the transmitted signal, whose strength shows nonlinear characteristics over distance. The WAVE standard uses a license-free RF band (i.e., 2.4 GHz) [17], which is open to interference from other signal sources, thus making the entire network vulnerable from the viewpoints of both communication and localization. Other existing technologies for vehicle localization include light detection and ranging (LiDAR) [18–20] and the time-of-flight (ToF) camera technique [21–24]. Light-emitting diodes (LEDs) and cameras or photodiodes are embedded in LiDAR and ToF system infrastructures; however, they are used only for detection and ranging. This equipment is not useful for vehicle-to-vehicle or vehicle-to-infrastructure communications [25–29] and is too expensive for use in a vehicular environment.

Optical wireless communication (OWC) is an emerging and promising technology [30] that is viable for handling scenarios wherein RF faces challenges. OWC is not intended to replace RF; rather, the coexistence of both can provide a better solution [31] for communication and localization. Optical camera communication (OCC) [32] is a subarea of OWC that uses a camera as a receiver to decode signals from a modulated light source, for example, LEDs, by varying the state of the light source to transmit binary data via optical channels. It is a secure, safe, reliable, and fast method for communication as well as localization [33]. A unique feature of OCC is that the camera used for vehicle localization can simultaneously be used to communicate with other vehicles that transmit signals using modulated lights. With minor modifications, LEDs in the existing infrastructure, that is, vehicles and streetlights (SLs), can be used for communication (e.g., bidirectional communication between two vehicles or between vehicles and infrastructure) [34–40].

To communicate better in outdoor environments, vehicles around the HV must be localized precisely. More importantly, the multiple-input and multiple-output (MIMO) feature of OCC [41] should allow the HV to communicate with more than one vehicle simultaneously. In [42], the authors present a received signal strength-based visible light communication localization scheme; however, it could not improve localization performance because more complex models of the environment or additional hardware are required. The localization of multiple vehicles requires incorporating OCC and photogrammetry technologies [43]. Photogrammetry [44, 45] is a branch of geometry wherein an image sensor (IS) is used to measure an object by quantifying the photon intensities of different wavelengths of light incident on an area, that is, a unit pixel of a camera. Photogrammetry helps accumulate information on the semantic and geometric properties and the variation of relative distances of objects, which refers to vehicles in this context. This vehicle location information can be shared with following vehicles with the help of OCC and rear-facing LED lights. Figure 1 shows a vehicle localization scheme combining OCC and photogrammetry.

Figure 1: OCC- and photogrammetry-based vehicle localization by comparing relative positions with the help of streetlights.

A vehicle localization technique, wherein each FV broadcasts its identity (ID) to the HV as an FV-ID, is proposed herein. After extracting the unique ID from the received signal, the HV can distinguish one FV from the others. Since the HV and FV simultaneously change their positions over time, the location of the HV should be normalized based on the location of a fixed object, for instance, an SL. By comparing the locations of more than one SL relative to the HV, a virtual location of the HV can be temporarily generated. This HV location information acts as the origin of a Cartesian coordinate system that allows determining the FV location relative to the HV.

For autonomous vehicle localization, infrared LED arrays can be attached to the SLs and to the back of the FVs to delineate the LED array area. Because the near-infrared (NIR) source is visible to the camera (though not to the human eye), it is possible to receive data from the FVs and SLs. The light intensity detected on the IS of the HV's camera is much higher for the nearest light sources than for distant ones. Compared with visible light-based communication, NIR-based communication is more strongly influenced by the optical channel; under daylight, it is challenging to receive data from an NIR-based transmitter. Importantly, recent developments in high-dynamic-range imaging reduce noise and enhance image quality under daylight [46, 47]. Therefore, ambient light is no longer expected to pose a problem for OCC, even when the transmitter operates in the NIR optical band. Simulation results show vehicle localization accuracy considering the impact of several parameters, including the signal-to-interference-plus-noise ratio (SINR), IS resolution, camera exposure time, and the distance between two SLs.

The remainder of this paper is organized as follows: Section 2 explains a detailed theoretical and mathematical model of our proposed scheme. Experimental setup for distance measurement is shown in Section 3. In Section 4, the simulation results associated with vehicle localization studies are presented. Finally, Section 5 presents a summary of lessons learned and concludes this study.

2. Development of Proposed Scheme

Almost every vehicle produced in recent years is equipped with a camera (typically less than 30 frames/s) that is used to monitor outdoor scenarios and to assist drivers by providing a view of their blind spots. Herein, the HV communicates with the FV and measures the distance between vehicles using such a camera mounted at the front of the vehicle. Using OCC, this camera detects transmitted signal IDs, such as the FV-ID and SL-ID, from each FV and SL simultaneously. A pair of taillights on an FV transmits its ID in different phases, modulating the data using a scheme called spatial two-phase-shift keying (S2-PSK) [48], to the HV's camera. These LEDs transmit at a constant clock rate (e.g., 125 or 200 Hz) to send a flicker-free signal. The SLs use the same modulation scheme as the FVs for transmitting the SL-ID. MIMO is a distinctive functionality of a camera that helps distinguish the FV-ID from the SL-ID. These IDs are required to determine the ROI for vehicle localization. The ROI specifies the camera's viewing region within an image and helps minimize the scope of false-position results from the main event. On the road, an FV can move side to side or change its direct distance with respect to the HV, which we refer to as horizontal shift and vertical shift, respectively. These position shifts lead to a change in image size that can be measured on the IS. Both the FV and HV move simultaneously; therefore, it is not always possible to localize the position of the FV relative to the HV directly. However, if the position of the HV is known, the relative positions of the FV and HV can be easily compared. The position of an SL is fixed relative to every vehicle on the road; therefore, it is necessary to receive SL-IDs from the SLs to determine the HV position. Figure 2 shows a flowchart of the proposed localization scheme wherein the FV location information is compared with the current HV location to identify spatial and temporal cases. After the IDs are received, the algorithm proceeds if the detected FV image area is greater than or equal to the unit pixel area. In the decision symbol of the algorithm, the threshold value indicates the minimum distance between the HV and FVs required to avoid collisions.
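As a sketch, one pass of this flow can be written as a short function. The names and return values below are illustrative, not from the paper's implementation; the checks against the unit pixel area and the collision threshold follow the flowchart's decision symbols.

```python
def localization_step(fv_image_area, unit_pixel_area, distance, threshold):
    """One pass of the Figure 2 flow (illustrative sketch).

    fv_image_area: detected FV tail-light image area on the IS
    unit_pixel_area: area of a single pixel on the IS
    distance: measured HV-FV distance; threshold: minimum safe distance
    """
    if fv_image_area < unit_pixel_area:
        return "skip"               # image too small to measure reliably
    if distance < threshold:
        return "collision-warning"  # below the minimum safe distance
    return "update-position"        # proceed with localization
```

For instance, an FV whose tail-light image covers less than one pixel is skipped, while an FV closer than the threshold raises a collision warning.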

Figure 2: Flow chart for a vehicle localization technique based on OCC and photogrammetry.
2.1. LED-ID from SL and FV

In Figure 3, the two LED pairs fixed on the back of the FV transmit a modulated FV-ID [48]. The SL transmits the SL-ID using the same modulation (i.e., S2-PSK) by dividing a single LED array into two pairs of LED arrays. Depending on the input bit sequence, the transmitting signal phases of the LED array pairs can differ. The scheme uses a symmetric Manchester symbol to map each LED symbol. Using a spatial undersampling approach, the LED pairs transmit in the same phase for bit 0 and in different phases for bit 1. The bit interval for one of the tail LEDs is as follows:

where is an unsigned integer of bit-interval cycles, is a bit interval, and is the cyclic interval of the signal.

Figure 3: Coding and decoding LED-ID using OCC.

The bit interval for the other tail LED is as follows:

From the same camera image, S2-PSK demodulates a bit from the pair of states of the two LEDs. At sampling time , identical states of the two tail LEDs in the same image represent bit 0; otherwise, the bit is 1. An XOR operation determines the value of the bit captured in the image as follows:

where and are the states of the two LEDs at sampling time .
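The per-frame XOR rule can be illustrated in a few lines of Python. This is a sketch that assumes the on/off state of each tail LED has already been extracted from the frame; the function name is ours.

```python
def s2psk_demodulate(led1_states, led2_states):
    """Decode one bit per camera frame: identical LED states give bit 0,
    differing states give bit 1 (an XOR of the two observed states)."""
    return [s1 ^ s2 for s1, s2 in zip(led1_states, led2_states)]

# Four frames: the LEDs share a state in frames 1 and 3, and differ otherwise.
bits = s2psk_demodulate([1, 1, 0, 1], [1, 0, 0, 0])
# bits == [0, 1, 0, 1]
```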

Compared with other modulation schemes (e.g., undersampled phase-shift on-off keying) [49], this demodulation achieves a lower bit error rate (BER) per image. A nonlinear XOR classifier can remove the remaining BER. The BER performance of this modulation scheme [41] is stated as follows:

where is the bit error probability of the LED state and is the error rate enhancement.

Considering environmental effects, the SINR [50] is expressed as follows:

where is the optical-to-electrical conversion efficiency at the camera, is the noise power spectral density, is the modulation bandwidth, is the optical channel gain, is the channel gain for interfering light sources, is the average optical power, and is the conversion factor between the average electrical power and the average optical power.

Meanwhile, the optical channel gain is expressed as follows:

where is the Lambertian index, is the physical area of the IS, is the distance between the transmitter and receiver, is the angle of incidence, and is the angle of irradiation.
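For reference, the standard line-of-sight Lambertian channel-gain expression used in optical wireless links (of which the equation above is an instance) can be sketched as follows; the parameter names are illustrative.

```python
import math

def lambertian_gain(m, sensor_area, distance, irradiation, incidence, half_fov):
    """Line-of-sight optical channel gain for a Lambertian emitter.

    m: Lambertian index; sensor_area: physical IS area (m^2);
    distance: Tx-Rx separation (m); irradiation/incidence: angles (rad);
    half_fov: receiver half field of view (rad).
    """
    if abs(incidence) > half_fov:
        return 0.0  # light arriving outside the FOV is not collected
    return ((m + 1) * sensor_area / (2 * math.pi * distance ** 2)
            * math.cos(irradiation) ** m * math.cos(incidence))
```

As expected, the gain falls off with the square of the distance, which is why nearby light sources appear far brighter on the IS than distant ones.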

2.2. Camera Calibration and Photogrammetry

In computer vision applications, camera calibration is essential for determining real-world coordinates from simple 2D images. The simplest camera calibration method uses a pinhole camera model to provide a perfect perspective transformation [51]. In a Euclidean coordinate system, the origin of the projected object coordinates is shifted from the principal point of the camera's image plane, as shown in Figure 4. Mapping an object's Euclidean three-space coordinates to Euclidean two-space allows the mapping of an object from real-world to image coordinates as follows:

where is the focal length of the camera and are the principal point coordinates of the camera.

Figure 4: Camera calibration for vehicle localization.

Homogeneous vectors allow us to map the coordinates of the real world and an image in terms of matrix multiplication as follows:

where and represent the focal length of the camera in terms of pixel area along the and directions, respectively; is the skew parameter, which is normally zero; is the camera's orientation relative to real-world coordinates; and denotes the camera's coordinates. Here, and denote the number of pixels per unit distance, expressed in image coordinates in the and directions, respectively. Equation (8) can be expressed succinctly as follows:

where is the calibration matrix of the camera, is the coordinate matrix in a world coordinate frame, and is an identity matrix.

Let the distance from the LED to the camera lens be , and let the distance from the focal point of the camera to the projected image on the IS be . Then, the relationship between the LED distance and the image distance is as follows:

The image area calculation depends on and , and it must satisfy the condition . Therefore, is equivalent to . The ratio between the height and width of an LED and the same ratio for its projected image is known as the magnification of the camera lens. This ratio is similar to the ratio of the LED distance and the image distance , which is described as follows:

The number of pixels on the IS for a particular object is the ratio of the projected image area to the unit pixel area of the IS. In an IS, the unit pixel length is , the unit pixel area is , and is the area of the LED light source. Thus, from (11), the following equation can be stated:

Distance is always an absolute value; therefore, the negative sign in (12) can be discarded. If the camera focal length and unit pixel length are held constant for a given camera, the distance to the LED is proportional to the square root of the LED's area and inversely proportional to the square root of the pixel area that the LED occupies on the IS [39].
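This proportionality gives a one-line distance estimator. The sketch below assumes the far-field case (image distance approximately equal to the focal length), so d = (f/p)·sqrt(A_LED/N), where f is the focal length, p the unit pixel length, A_LED the light source area, and N the pixel count it occupies; all names are illustrative.

```python
import math

def estimate_distance(focal_len_mm, pixel_pitch_mm, led_area_mm2, n_pixels):
    """Photogrammetric distance to an LED from its pixel footprint,
    following the proportionality derived from (12): distance scales with
    sqrt(LED area) and with 1/sqrt(occupied pixel count)."""
    return (focal_len_mm / pixel_pitch_mm) * math.sqrt(led_area_mm2 / n_pixels)

# With a 4 mm lens and 1.4 um pixels, a 100 mm x 100 mm LED panel covering
# about 20,400 pixels sits roughly 2 m from the camera.
d_mm = estimate_distance(4.0, 0.0014, 100.0 * 100.0, 20408)
```

Note the inverse-square-root behavior: quadrupling the occupied pixel count halves the estimated distance.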

2.3. Determining HV Position

An origin, as in the Cartesian and polar coordinate systems, is required to determine the position of one or more objects in either 2D or 3D space. In an outdoor environment, nearly every vehicle frequently changes its location; the measured distances from the HV to the FV are not always accurate because of subsequent variations in their relative positions over time. Therefore, the location of the FV cannot be measured from the origin or any stable location, and it is better if the HV location is known throughout this period. The shift in the FV's location can be measured by comparing its location with the current location of the HV.

Our proposed localization scheme measures the HV's location by comparing it with the locations of SLs, which are always fixed with respect to the vehicles in this mobile scenario. This distance comparison yields location information for the HV, which also represents its virtual coordinates. To ensure the accuracy of this measurement system, the HV combines location information from the onboard diagnostics II (OBD-II) system and the SLs. The SL-ID should contain unique information that distinguishes it from other transmitted signals, such as the FV-ID. The header of the SL-ID indicates that the ID belongs to a specific SL. In addition, other information, such as the height of the SL from the ground and the distance between two SLs on the same road, can be added after the header of the ID. Along the same direction, all SL-IDs share a common structure, with a unique value within the IDs that increases or decreases gradually.

After selecting the ROI, the distance between the camera and the LED of the SL is measured using photogrammetry. Figure 5 shows how the SL-IDs received within the field of view (FOV) of the camera change owing to a change in the HV position. The two axes are used to show the midpoint of the IS. In Figure 5(a), the IDs from the SLs are SL-ID#1∼SL-ID#4 at time t. These IDs change to SL-ID#2∼SL-ID#5 at time (t + 1), as shown in Figure 5(b). The projected image of the nearest SL occupies a greater area on the IS than those of the other SLs. The direct distance is calculated using (12); the distance for SL-ID#1 is shorter than that for SL-ID#4, as shown in Figure 5(a).

Figure 5: SL-IDs change with the change of HV’s position from (a) at time t to (b) at time (t + 1).

Using OCC, the camera decodes the SL-IDs of the SLs. Figure 6(a) shows that the SL's height is , and the constant distance between two SLs is , where is related to the SL number. Using photogrammetry, is determined as the measured direct distance between the camera and the SL's LED, where denotes the iteration sequence over a period. The horizontal distance between the camera and the SL is . These horizontal distances are calculated by applying the Pythagorean theorem to a right triangle in which and the SL's height are the remaining two sides.

Figure 6: Obtaining virtual coordinates from (a) road scenario (geometric view (side)) and (b) measuring vertical distance from the HV to pavement (geometric view (top)).

From the top geometric view, a few triangulations can be generated after decoding this distance information. At a certain time, applying the Pythagorean theorem again to triangles CSL1Ht and CSL1SL2, we obtain

where is the horizontal distance from the camera to the pavement, is the distance between the cross point of the horizontal line and the shortest distance from the cross point to the SL, is the horizontal distance for SL1, and is that for SL2, as shown in Figure 6(b). In all cases, . We combine (13) and (14) as

The horizontal distance from the camera to the pavement is determined by combining (13) and (15) as follows:
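Under the geometry of Figure 6, the virtual HV coordinates can be recovered from two SL distances with nothing more than these Pythagorean relations. The helper below is a sketch with illustrative names: d1 and d2 are direct camera-to-SL distances, h the SL height, and D the SL spacing; it returns the lateral distance to the SL line and the along-road offset from SL1.

```python
import math

def hv_virtual_position(d1, d2, h, D):
    """Virtual HV coordinates from two streetlight distances (a sketch of
    the Eqs. (13)-(16) geometry; the HV is assumed behind both SLs)."""
    l1 = math.sqrt(d1 ** 2 - h ** 2)            # horizontal distance to SL1
    l2 = math.sqrt(d2 ** 2 - h ** 2)            # horizontal distance to SL2
    y = (l2 ** 2 - l1 ** 2 - D ** 2) / (2 * D)  # along-road offset from SL1
    x = math.sqrt(l1 ** 2 - y ** 2)             # lateral distance to SL line
    return x, y
```

The subtraction of the two squared horizontal distances eliminates the unknown lateral offset, leaving the along-road offset, after which the lateral distance follows from one more right triangle.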

The position of the HV is a function that varies with the horizontal position , which is always positive; the angular position of the SL relative to the HV; the SL's LED image area on the IS; and the velocity of the HV. When the HV moves, the parameters related to its position change. If the initial position is recorded at time , then after , the position of the HV is stated as follows:

The horizontal distance between the HV and the pavement is a function of the horizontal direct distance ; the distance between two SLs, that is, ; and the distance between the cross point of the horizontal line and the shortest distance from the cross point to the SL, that is, . All of these values also change with the angular position .

The direct distance between the SL and the HV depends on the area of the SL's LED on the IS and the angular position . If the value is less than the unit pixel area (this occurs when the SL is too far from the HV), the image area value is excluded from the calculation. On the other hand, the angular position changes with the bending of the road (or the edge of a road), and changes accordingly. Therefore, the following expression is stated for measuring the direct distance:

From the initial time to , the change in angular position is found by simply comparing the current angular position with the previously recorded value as follows:

The horizontal distance is set as the x coordinate and is set as the y coordinate for the HV with respect to the nearest SL. Therefore, when the HV moves, this distance information is updated. Figure 7 shows the flowchart for calculating and correcting the horizontal position information. From (20), an initial angular position that differs considerably from the conjugate angular position indicates the presence of road curvature. In such cases, infrastructure effect optimization is required; otherwise, the horizontal distance can be easily updated.

Figure 7: Flowchart of detailed development of the HV’s location information.
2.4. Determining the Position of FVs using the Position of HV

Each vehicle has a pair of headlights and a pair of taillights. Using OCC, the taillights of FVs transmit modulated signals to a receiver, that is, the camera of the following vehicle. Using this modulated signal, the FV transmits emergency information along with some basic vehicle information, for example, the area of a single light at the rear of the vehicle. The signal transmitted from one pair of taillights is denoted as the FV-ID, and each ID is unique among vehicles.

For proper communication among vehicles (i.e., FVs and the HV) on the road, the signal transmitted from both taillights must be received by the camera. There are scenarios in which signal interruption can occur; for instance, the HV may monitor two vehicles from an angle at which one of the lights of a vehicle is covered by another vehicle. In this case, data extraction is not possible even though a single light signal is received by the HV's camera. Moreover, two vehicles can be differentiated using their FV-IDs even if they are moving in parallel. These advantages of LED-ID-based vehicle identification make it possible to fix the ROI, which is the preliminary condition for successful communication and localization. Figure 8 shows every FV broadcasting its FV-ID along with the SLs broadcasting SL-IDs. In Figure 8(a), the background is turned black by controlling the shutter speed of the camera mounted in the HV. After demodulation and decoding of the transmitted signals, all IDs are accumulated, as shown in Figure 8(b).

Figure 8: Using OCC, (a) selecting region of interest and (b) receiving IDs.

The area of the taillight LED arrays on the IS changes with the distance between the HV and FV. By calculating the area of these images on the IS, two types of FV position shift can be determined with respect to the HV, namely, horizontal shift and vertical shift. A horizontal FV shift is visible if the vehicle changes its position from side to side. Concurrently, the vehicle can slow down, changing the direct distance between the FV and HV, which is defined as a vertical shift. Figure 9(a) shows an image of the taillight LED that is projected on the left side of the image after being refracted by the camera lens; the original light source is on the HV's right side. Furthermore, Figures 9(a) and 9(b) show that the vehicle is moving from the right to the middle and later to the left with respect to the HV. By contrast, in Figure 10(a), the area of the projected image is smaller than in Figures 10(b) and 10(c), which shows that the FV was initially far from the HV and that this distance decreased gradually.
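The two shift types can be separated with a simple per-frame comparison of the tail-light image's centroid and area. The following sketch uses illustrative names; note that the lens mirrors the scene, so motion of the image toward the left corresponds to the FV moving toward the HV's right.

```python
def classify_fv_shift(prev, curr):
    """Classify FV movement between two frames (illustrative sketch).

    prev, curr: (centroid_x_px, area_px) of the tail-light image.
    Returns (horizontal, vertical) shift labels for the image.
    """
    dx = curr[0] - prev[0]
    d_area = curr[1] - prev[1]
    # Horizontal shift: sideways movement of the image centroid.
    horizontal = "left" if dx < 0 else "right" if dx > 0 else "none"
    # Vertical shift: a growing image area means a shrinking direct distance.
    vertical = "closer" if d_area > 0 else "farther" if d_area < 0 else "none"
    return horizontal, vertical
```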

Figure 9: A pair of the FV's taillights moves from the left to the middle and finally to the right on the image sensor, implying that the FV is moving from right to left with respect to the HV. (a) FV at the right side of the HV. (b) FV directly ahead of the HV. (c) FV at the left side of the HV.
Figure 10: A pair of the FV's taillights increases in size gradually from the left, middle, and right on the image sensor, implying that the FV is moving closer to the HV. (a) FV keeps a safe distance from the HV. (b) FV at the minimum distance from the HV. (c) Critical distance between the FV and HV.

Generally, an IS consists of a 2D pixel array of photodetectors and transistors, vertical and horizontal access circuitry, and readout circuitry. Each pixel is accessed by the access circuitry, and the readout circuitry reads the signal value in the pixel. In dense traffic scenarios, the angular position of the FV relative to the HV helps alleviate position measurement error. Therefore, a plane through the middle of the IS is considered as the center plane, as in Figure 11, which vertically separates the IS into two halves. With respect to this plane, both the angular displacement of an FV and the corresponding horizontal displacement on the IS can be measured. Here, is the number of FV-IDs received by the HV's camera. In Figure 11, different image colors on the IS distinguish one FV-ID from the others. The angular displacement of an FV is always zero when the FV is located at the center plane. Otherwise, the numerical value of the angular displacement helps us mitigate some challenges in FV positioning, for example, depth estimation, lane-changing information, and position estimation error mitigation for the left and right sides of the road. The calculated horizontal displacement on the IS for the corresponding FV is a function of the FV's taillight image area and depends on the angular displacement of the FV as follows:

Figure 11: Measurement of vehicular angular position from horizontal displacement on the image sensor.

The FV's position can be determined by comparing it with the position of the HV along with the taillight image area of the FV, the horizontal displacement on the IS for the corresponding FV, and the speed of the FV, that is, . Over time, these parameters will change, and consequently, the position of the FV will change. If is the initial time, then the possible position of the FV at is as follows:

3. Experimental Distance Measurement

Distance measurement using a camera is one of the important steps in the proposed scheme. Figure 12 shows the experimental setup and the distance measurement procedure performed using our existing facilities under an ambient light environment. A circular LED light was used to transmit the signal, and a smartphone camera was used as the receiver. As the smartphone moved, the observed distance changed. Figure 13 shows the results of the experimental distance measurement as the percentage error with respect to the actual distance. The error remains within 1% for most distance measurements. Although the experiment could not be performed in a real vehicle environment owing to a lack of facilities, this distance measurement experiment validated the feasibility of the proposed scheme.

Figure 12: Experimental setup for distance measurement: (a) LED light at the distance of 600 mm and (b) LED light at the distance of 2550 mm from the camera.
Figure 13: Experimental measured distance value versus error resolution.

4. Simulation Results

Several factors and environmental impacts must be considered to achieve localization accuracy. For the simulation results, we considered a smooth surface, ignoring the turbulence caused by vehicle movements and the impact of bad weather conditions (e.g., fog, snow, and rain). The effect of a single parameter on vehicle localization accuracy was considered while the other parameters were held constant. Table 1 lists the transmitter parameters and summarizes the specifications of the receiver (i.e., the camera) and the optical channel environment.

Table 1: Transmitter and receiver parameters for simulation results.

A Gaussian filter, which acts as a low-pass blurring filter in image processing, is used to estimate the BER performance of the OCC system with respect to the SINR. In this case, the variance (= 0.5) for channel filtering is considered zero in the ideal state. The curves for the estimated variances of the Gaussian filter are plotted in Figure 14 to evaluate the influence of the estimation error of the channel filter. In this regard, the S2-PSK modulation-based OCC system shows better BER performance with respect to the SINR.

Figure 14: SINR versus BER.
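For intuition about the shape of such SINR-versus-BER curves, a Monte Carlo sketch of binary antipodal signaling under Gaussian noise can be compared against the theoretical Q-function. This is a simplified stand-in for illustration, not the full S2-PSK receiver chain:

```python
import math
import random

def q_function(x):
    """Tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def simulate_ber(snr_db, n_bits=100_000, seed=1):
    """Monte Carlo BER for binary antipodal signaling in Gaussian noise."""
    random.seed(seed)
    snr = 10.0 ** (snr_db / 10.0)
    sigma = 1.0 / math.sqrt(2.0 * snr)  # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = random.randint(0, 1)
        received = (1.0 if bit else -1.0) + random.gauss(0.0, sigma)
        if (received > 0.0) != bool(bit):
            errors += 1
    return errors / n_bits
```

The simulated curve should track Q(sqrt(2*SNR)) and fall steeply with SNR, the same qualitative behavior seen in Figure 14.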

The data rate depends on the camera frame rate: one data bit can be detected from each camera frame. For instance, 30 bits per second can be received with a camera operating at 30 frames per second (fps). In S2-PSK, Manchester coding is used for data encoding; because each data bit occupies two channel symbols, the effective throughput is half the raw bit rate, that is, 15 bps for a 30 fps camera. Figure 15 shows the BER performance of the camera receiver at varying data rates. The simulation results are formulated for data rates of 1, 2, and 5 bps, with the required LED power increasing accordingly.
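The frame-rate arithmetic above can be captured in a short sketch; the 1 -> [1, 0] symbol mapping is an assumed Manchester convention for illustration:

```python
def effective_data_rate(fps, manchester=True):
    """One bit is detected per camera frame; Manchester coding halves throughput."""
    raw_bps = fps  # one detected bit per frame
    return raw_bps / 2 if manchester else raw_bps

def manchester_encode(bits):
    """Each data bit becomes two channel symbols (assumed 1 -> [1, 0], 0 -> [0, 1])."""
    out = []
    for b in bits:
        out.extend([1, 0] if b else [0, 1])
    return out
```

For a 30 fps camera this yields the 15 bps figure quoted in the text.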

Figure 15: LED power versus BER.

A distance error occurs when there is a discrepancy between the actual and measured values of distance. Systematic error caused by environmental factors, surveillance approaches, and tools leads to this mismeasurement in such a dynamic vehicular environment and must be minimized to achieve better positioning accuracy. The average error is obtained from a series of repeated measurements, whereas the maximum error is taken from a single measurement. Relating the distance error to different camera parameters, in terms of both average and maximum errors, helps improve the distance measurement approach. IS resolution, defined by the total number of pixels, is an important camera parameter that affects distance calculation. A higher resolution allows the area of the LED array to be calculated more precisely because it captures more detail about the detected LEDs for measuring their area on the IS; a lower resolution causes larger distance measurement errors. In Figure 16, at 1 megapixel, both the maximum and average distance calculation errors are high, that is, 17.5 cm and 13.3 cm, respectively. From 5 to 10 megapixels, the maximum distance measurement error varies linearly, whereas the average error remains fixed.
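The effect of pixel quantization on area-based ranging can be illustrated as follows; the focal length, LED area, sensor size, and distance are assumed values for illustration only:

```python
import math

def quantized_distance(focal_mm, real_area_mm2, true_z_mm, total_pixels, sensor_area_mm2):
    """Distance estimate after pixel-quantizing the LED's projected area.

    Z = f * sqrt(A_real / A_image); the image area is known only as a
    whole number of pixels, so coarser pixels (lower resolution)
    quantize the area more heavily and enlarge the distance error.
    """
    pixel_area = sensor_area_mm2 / total_pixels
    true_image_area = real_area_mm2 * (focal_mm / true_z_mm) ** 2
    counted_pixels = math.floor(true_image_area / pixel_area)  # whole pixels only
    return focal_mm * math.sqrt(real_area_mm2 / (counted_pixels * pixel_area))
```

Comparing a 1-megapixel and a 10-megapixel sensor of the same physical size shows the quantization-induced error shrinking with resolution, consistent with the trend in Figure 16.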

Figure 16: Image sensor resolution of the camera with respect to the distance measurement error.

In our proposed scheme, the camera must receive signals from LEDs in high-speed moving scenarios. In such dynamic scenarios, the IS should be fully exposed to the illumination, capturing every detail of the targeted LEDs, that is, the streetlights and the taillights of forwarding vehicles. The exposure time (or shutter speed) of the camera determines the period during which light falls on the IS. In the high-speed vehicular case, a long exposure time causes a blurred image, whereas a short exposure time allows detailed flashes of light from a target object to be captured. Because distance calculation depends on the quality of the image received at the IS, the exposure time affects its performance. In Figure 17, both the average and maximum distance measurement errors evolve similarly with the exposure time of the camera IS. Localization accuracy is better at shorter exposure times (e.g., 1/2000 s); however, at an exposure time of 1/15 s, the distance measurement error reaches its maximum of 18.8 cm.
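A rough estimate of exposure-induced smear follows from the pinhole magnification; the speeds and camera parameters below are assumptions for illustration:

```python
def motion_blur_pixels(relative_speed_mps, exposure_s, focal_mm, distance_mm, pixel_pitch_mm):
    """Image-plane smear (in pixels) of a target during one exposure."""
    motion_mm = relative_speed_mps * 1000.0 * exposure_s    # lateral motion of the target
    image_shift_mm = motion_mm * focal_mm / distance_mm     # scaled by the magnification f/Z
    return image_shift_mm / pixel_pitch_mm
```

At an assumed 14 m/s relative speed and 20 m range, a 1/2000 s shutter keeps the smear near a single pixel, while 1/15 s smears the LED over more than a hundred pixels, which is why the longer exposure degrades distance accuracy in Figure 17.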

Figure 17: Camera exposure time with respect to the distance measurement error.

In a mobile environment, speed and position shift are two important factors that affect vehicle distance measurement. To simulate the effect of the FV's speed on the distance calculation error, we assume zero position shift of the FV with respect to the HV. The speed of the FV varies from 0 to 110 km/h within a 200 m distance, whereas the speed of the HV is held constant at 30 km/h during the simulation period plotted in Figure 18. At the very beginning, a distance measurement error therefore occurs owing to the speed of the HV relative to the FV. As the FV's speed increases to 110 km/h, both the average and maximum distance measurement errors increase gradually. This error arises from the execution time required by the HV for position calculation.
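The latency-induced error described above is simply the relative speed multiplied by the processing time; the 50 ms execution time used below is an assumed figure, not a measured one:

```python
def latency_distance_error(hv_speed_kmh, fv_speed_kmh, processing_time_s):
    """Range drift accrued while the HV is still computing the position."""
    relative_mps = abs(fv_speed_kmh - hv_speed_kmh) / 3.6
    return relative_mps * processing_time_s
```

With the HV fixed at 30 km/h, the error grows with the FV's speed, matching the upward trend in Figure 18.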

Figure 18: Distance measurement error with varying FV speed when the position shift is maintained at zero.

The position accuracy of the HV was measured at a constant vehicular speed (i.e., 50 km/h) while the distance between SLs varied from 10 to 150 m. Figure 19 shows that the accuracy decreases as the inter-SL distance increases. Moreover, at the beginning, the distance measurement accuracy is relatively low owing to the speed of the HV and the time required for data extraction from the SLs. At 50 km/h with a small inter-SL distance of 10 m, a greater number of SLs is crossed within a short period compared with the case of a 40 m spacing. At the highest point of the graph, the SL spacing and the reception of SL-IDs combine to give the best distance measurement accuracy, approximately 90%. In addition, the number of SLs influences performance: it improves as the number of SLs increases, because more SL-IDs can be obtained at the same time and the position of the HV can be calculated more precisely. Conversely, as the distance between SLs increases, the opportunity to compare the location information of the SLs for accurate HV positioning diminishes, and the distance measurement accuracy slopes downward. The lines maintain a constant margin up to 150 m because no new SL-IDs have been obtained by then.
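The crossing-rate argument above can be made concrete; the 10 s observation window below is an assumed value for illustration:

```python
def streetlights_crossed(speed_kmh, spacing_m, window_s):
    """Number of streetlights passed in a time window at a given spacing."""
    return (speed_kmh / 3.6) * window_s / spacing_m
```

At 50 km/h a 10 m spacing delivers roughly four times as many SL-IDs per window as a 40 m spacing, which is why the tighter spacing yields better HV positioning in Figure 19.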

Figure 19: Effect of the distance between streetlights on the measurement accuracy of the HV's position.

5. Conclusions

A vehicle localization technique for an outdoor environment is proposed herein. The technique employs photogrammetry, which is a novel idea for localization, and integrating OCC with photogrammetry improved vehicle localization performance. The proposed technique measures the distance between HVs and FVs by calculating the image area on the IS. Beforehand, the HV receives FV-IDs from each FV and uses OCC to decode these IDs. The HV's current location information helps mitigate relative position shifts between the HV and FVs. The SLs communicate with the HV in the same way as the FVs, and the location information of the HV is accumulated by comparing the locations of the SLs with the HV's OBD II system. Experimental distance measurement confirmed the feasibility of the proposed scheme; overall distance measurement errors were within 12–20 cm when a change in a single parameter was considered. The tail LEDs of FVs differ in size, and recognition of such LEDs is beyond the scope of this study. A deep-learning-based algorithm will be required to enable a single camera to overcome all challenges related to vehicle detection and localization.

Conflicts of Interest

The authors declare that they have no conflicts of interest.


Acknowledgments

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the Global IT Talent support program (IITP-2017-0-01806) supervised by the IITP (Institute for Information and Communication Technology Promotion).

