Emerging Small Cell Wireless Technologies for 5G: Architectures and Applications
Research Article | Open Access
A Novel Indoor Mobile Localization System Based on Optical Camera Communication
Abstract
Localizing smartphones in indoor environments offers excellent opportunities for e-commerce. In this paper, we propose a localization technique for smartphones in indoor environments. This technique can calculate the coordinates of a smartphone using existing illumination infrastructure with light-emitting diodes (LEDs). The system can locate smartphones without further modification of the existing LED light infrastructure. Smartphones do not have a fixed position and may move frequently anywhere in an environment. Our algorithm uses multiple (i.e., more than two) LED lights simultaneously. The smartphone receives the LED-IDs from the LED lights that are within the field of view (FOV) of the smartphone's camera. These LED-IDs contain the coordinate information (e.g., the x- and y-coordinates) of the LED lights. Concurrently, the pixel area of the projected image on the image sensor (IS) changes with the relative motion between the smartphone and each LED light, which allows the algorithm to calculate the distance from the smartphone to that LED. At the end of this paper, we present simulated results for predicting the next possible location of the smartphone using a Kalman filter to minimize the time delay for coordinate calculation. These simulated results demonstrate that the position resolution can be maintained within 10 cm.
1. Introduction
Nowadays, the number of mobile devices (i.e., smartphones) is increasing dramatically. These devices have an immense scope for commercial applications, for example, services, e-commerce, and e-banking. To move forward with consumer-facing commercial applications, location-based services (LBS) for smartphones need to be improved. As the location of a mobile device is unpredictable, a reliable, dynamic, accurate, and situation-adaptive localization technique is required for LBS [1]. Moreover, this technique should be secure and interruption free. LBS approaches will not increase consumer activity without a reliable localization system. Localization schemes should be designed for both indoor and outdoor environments. Indoor localization is the most promising for e-commerce applications, as most of them are indoor based. The access density of smartphones is highest in shopping malls, supermarkets, and transit stations (i.e., railway, bus, and subway). Indoor environments are hubs for almost all web-based business applications. Outdoor localization schemes are also promising and may be improved with insights from indoor localization solutions.
Both industry and academic research institutes have recently shown interest in the issue of indoor smartphone localization, and various schemes have been proposed [2, 3]. The most common and widely used localization system relies on the global positioning system (GPS) [4]. However, this system has three particular limitations (i.e., poor GPS signal reception, loss of GPS signal, and limited localization accuracy) [5], especially in indoor environments. It is not suitable for underground or indoor localization, since the signal path from the satellites to a receiver must be line-of-sight (LOS), and buildings, soil, water, trees, or even poor weather conditions inhibit this signal. Time-of-flight (TOF) cameras are another possible candidate for localization [6]. Despite their advantages, they are too expensive and sometimes require complex scenarios for implementation, which makes them inappropriate as a universal approach. TOF cameras also have other drawbacks: they are only useful for detection and ranging purposes and do not facilitate communication [7]. Received signal strength indication (RSSI), time of arrival (TOA), time difference of arrival (TDOA), and angle of arrival (AOA) are physical parameters of a radio signal that can be used for localization with specially distributed monitors [8, 9]. Other approaches used for indoor localization include computer vision (CV) and artificial intelligence (AI) [10]. None of these approaches is ideal for indoor localization. For example, RSSI depends on environmental conditions: it is affected by shadowing, path loss, signal fading, and interference from neighboring cells [11], so errors can enter the calculated value. Likewise, TDOA suffers from narrowband signals, reduced data transmission rates, and lower location precision [12].
Other approaches, such as combining RSSI and AI, cannot fully mitigate the challenges of indoor localization because subjective and objective data from the AI model affect how new input data are handled [13]. To improve system performance, the model may feed its output back to the system input; this has a large impact on new data if the input variables vary quickly. Another approach, photogrammetry, measures object locations from photographs. It is a very simple way to generate a map from sequentially taken photographs [14].
To deal with the existing challenges of indoor localization, the optical frequency band of the electromagnetic spectrum is a novel candidate that avoids the problems typical of radio frequency (RF). The communication technology that uses optical frequencies is known as optical wireless communication (OWC). A promising subsystem of OWC is optical camera communication (OCC), where multiple LEDs are used as transmitters and a camera or image sensor (IS) is used as the receiver [15]. In an indoor environment, the communication channel for OCC is uninterrupted, the signal-to-noise ratio (SNR) is high, security is ensured by LOS communication, signal processing is simple, and high-speed communication is possible [16]. OCC has multiple-input multiple-output (MIMO) functionality [17], which is ideal for analyzing many objects at the same time. Therefore, OCC is a good technique for indoor localization systems. Moreover, OCC is useful not only for localization but also for communication. In 2011, the IEEE formed a new working group to finalize the standardization specification (i.e., IEEE 802.15.7) [18]. The development of the standardization specification for OCC is expected to be finalized by mid-2018.
In the proposed scheme, smartphones are located through signals from LED lights using OCC and photogrammetry. Using OCC techniques, the smartphone camera receives an LED-ID signal from each LED light (or fixture). This LED-ID encodes the coordinates of the LED light. The size of the image of the LED fixture on the image sensor of the camera changes with the relative distance between the smartphone and the LED fixture. This distance is calculated using photogrammetry, which helps to find the remaining coordinates of the smartphone. To optimize the resolution, a Kalman filter is applied to accurately predict the next possible locations of the smartphone.
The rest of the paper is organized as follows. In Section 2, we survey the literature on indoor localization systems. Our proposed scheme is described in Section 3. In Section 4, the channel modeling and communication are explained. The processes for distance calculation and localization are presented in Section 5. In Section 6, the performance of our proposed system using the Kalman filter is evaluated. Finally, we summarize our work and future research directions in Section 7.
2. Related Works
In this section, several indoor localization schemes are summarized. OWC-based indoor localization can be classified in three ways: triangulation, fingerprinting, and proximity-based approaches. Triangulation requires geometric positions, fingerprinting is a scene-analysis approach, and the proximity-based approach is a grid method. In [19], vision-based positioning and navigation are introduced with the help of a camera and a newly defined 3D map of an indoor environment. The authors explored possible ways to improve accuracy and system reliability through several experiments. They mainly focused on different applications, for example, mobile robot navigation, transportation, visitor guiding, security, and emergency services. In the same manner, the authors in [20] propose a vision-based system using a camera as the main sensor for object detection, tracking, and localization for distributed sensing, communication, and parallel computing. They used camera nodes as inputs to sensor fusion techniques, such as the Extended Kalman Filter (EKF) and Maximum Likelihood (ML), to reduce the computational burden. They also tested the system on moving objects. The drawback of their proposed system is the lack of automatic determination of sensor locations. For an indoor environment, a vision-based navigation system using augmented reality was proposed in [21]. It recognizes a location automatically using image sequences taken in the indoor environment and then realizes augmented reality by seamlessly superimposing detailed location information onto the user's view. The authors use a wearable mobile PC with a camera to capture image sequences, and it transmits the images to remote PCs for further processing. Using their proposed system, the average location recognition success rate was around 89%; moreover, its performance deteriorates in harsh environmental scenarios. In [22], a neural network and OCC based indoor positioning system is proposed.
The authors estimate camera position for trained and untrained environments. The error in the estimated camera position is less than 10 mm but can increase up to 200 mm, depending on the camera position. In [23], the authors demonstrated a system for positioning and orientation that overlays location information on camera phones in an indoor environment. They find a location from images with the help of existing standard hardware, processing little data within a very short time to generate accurate results. The limitation of their proposed system in handling the whole environment is that feature detection and matching must be improved while maintaining low latency. Their proposed system is complex and very costly for navigation on a standard camera phone. In [24], the authors propose a camera model for OCC based on radiometry and camera geometry. They consider a camera model for an indoor environment with a 50 cm separation for the LEDs from the camera and a 200 cm distance between LEDs.
3. Overall Architecture
We propose a localization scheme in which smartphones can be located in an indoor environment with the help of LED lights and the smartphone's camera. In our proposed scheme, we consider several factors to identify smartphone locations in indoor environments. As in Figure 1, all LED light fixtures are attached to the ceiling, the distance between ceiling and floor is constant for a particular indoor environment, the camera of the smartphone must be under the illumination of the LED light fixtures, and there must be at least three LED lights in the field of view (FOV) of the camera, while the smartphone maintains continuous communication with the lighting server. The lighting server provides API-based access to reproducible, web-based visualizations. The FOV of a camera is the solid angle through which the IS can sense electromagnetic radiation such as visible light. System performance improves as the number of LED lights within the camera FOV increases, and our algorithm requires at least three light fixtures to deliver accurate location measurements. In our system, the LED light fixture coordinates are parallel to the world coordinate system. The vertical distance between ceiling and floor (i.e., the z-coordinate) is equal for all LED lights, where the z-axis of the light fixtures points in the direction opposite to that of the world coordinates. However, the camera coordinates of the smartphone change frequently with respect to the LED light fixture coordinates.
As shown in Figure 2, each LED light broadcasts its own coordinate information (i.e., its x- and y-coordinates) as a modulated LED-ID signal. The z-coordinate is the same for every LED light in a given indoor environment; therefore, the z-coordinate term in the LED-ID is omitted to reduce complexity and data packet size. These LED-IDs are transmitted as modulated optical signals. After receiving the signals from the LED lights, smartphones process the data in two different ways. They identify the LED-IDs from the received signals, and they measure the distance to the LED light with the corresponding LED-ID using photogrammetry. This distance is measured by calculating the size of the light fixture and the number of pixels of the corresponding LED light on the IS. The geometric image size of the LED on the IS varies with the distance between the light source and the camera: if the LED is located far away from the camera, the image on the IS is smaller, and a comparatively large image is produced for an LED light that is near the camera.
The LED-IDs with the corresponding distance calculations are sent to the lighting server by the smartphone via a Wi-Fi access point (AP). The coordinate information for each LED light is stored in the lighting server. After receiving a signal from the smartphone, the lighting server matches the LED-ID signal with its stored location information. Then the algorithm can map the location of the smartphone.
Meanwhile, the location of the smartphone may change during this processing time. Therefore, the lighting server uses a Kalman filter tracking algorithm, which predicts the next possible location from the current location of the smartphone. This location information is sent to the smartphone. Additionally, the LED light's projected image on the IS changes as the smartphone moves. Therefore, the placement interval of the LED lights should be constant to ensure that at least three images of LED lights are available from any location in the room.
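To make the prediction step concrete, the following is a minimal sketch of a constant-velocity Kalman filter predicting the next smartphone position from successive trilateration fixes. The time step, noise levels, and motion model are all illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

# State vector: [x, y, vx, vy]; dt and all noise levels are assumed values.
dt = 0.1  # seconds between position fixes (assumption)
F = np.array([[1, 0, dt, 0],   # state transition (constant velocity)
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only position is measured, not velocity
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3           # process noise (assumption)
R = np.eye(2) * 0.01           # measurement noise, ~10 cm std in metres (assumption)

def kalman_step(x, P, z):
    """One predict/update cycle; z is the measured (x, y) position in metres."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the trilateration measurement
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Feed successive fixes; F @ x is the predicted next location the server
# can send ahead of the next coordinate calculation.
x, P = np.zeros(4), np.eye(4)
for z in [np.array([0.0, 0.0]), np.array([0.05, 0.02]), np.array([0.10, 0.04])]:
    x, P = kalman_step(x, P, z)
predicted_next = (F @ x)[:2]
```

The constant-velocity model is the simplest reasonable choice for a hand-held device; a real deployment would tune Q and R to the observed motion and measurement noise.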
4. Channel Modeling and Communication
4.1. Propagation Model of Light from LEDs
The radiation pattern of LED lights is affected by the roughness of the chip faces and the geometry of the encapsulating lens. Several models are used to describe the directional light strength of a light source (i.e., an LED). A popular approach is Monte Carlo ray tracing [25]. In a Gaussian (or cosine) power distribution [26], each ray of light is diffusely reflected or refracted. Because of this diffuse reflection or refraction, the final radiation pattern appears as a linear superposition of such distributions, angularly shifted as a function of the angle of incidence of every traced ray, as shown in Figure 3.
The energy flux per solid angle, known as the luminous intensity, and the transmitted optical power are the two basic properties of a light source such as an LED light. The luminous intensity is given as

$$I(\phi) = \frac{d\Phi}{d\Omega}, \qquad (1)$$

where $\Phi$ is the luminous flux and $I(0)$ is the center luminous intensity of an LED. This luminous flux can be defined as an integration between the minimum wavelength, $\lambda_{\min}$, and the maximum wavelength, $\lambda_{\max}$, of the working optical spectrum:

$$\Phi = K_m \int_{\lambda_{\min}}^{\lambda_{\max}} V(\lambda)\,\Phi_e(\lambda)\,d\lambda, \qquad (2)$$

where $V(\lambda)$ is the standard luminosity curve, $\Phi_e(\lambda)$ is the spectral radiant flux, and $K_m$ is the maximum spectral efficacy for vision. The integral of the energy flux in all directions is the transmitted optical power $P_t$, given as

$$P_t = \int_{\Omega} I(\phi)\,d\Omega. \qquad (3)$$
Figure 4(a) shows the LED light propagation direction, and Figure 4(b) shows the strength of the radiation from a single light source. The power level is at its maximum of 40 dBm at the center of the light source, represented as yellow in Figure 4(a), and deteriorates to −80 dBm, represented as violet, from the center to the edge of the sphere. Joining the average power strengths of the light at given x- and y-coordinates forms the ellipse shape in Figure 4(b). Weak signals from neighboring light sources therefore cause signal interference. This problem can be mitigated by removing the background noise with full control of the camera shutter speed.
4.2. LEDID for Optical Camera Communication
Each LED light has a fixed location, and the coordinate information of each LED light differs from that of the other LED lights in the same room. The coordinates of the LED lights are parallel to the world coordinates. These coordinates are analogous to the LED-ID. Every LED light transmits its own ID, which can be regarded as a digital tag, to the camera. For this purpose, the LED acts as a transmitter and the camera as a receiver.
The data from the LED light is sent as a modulated signal by varying the intensity of the light using an LED driver integrated circuit (IC). This driver controls the light intensity by dimming the LED through a variety of methods. Data from the LED light can be encoded in the phase of the LED light signal, and the phase can be changed by turning the LED light on and off. However, turning the LED light back on is not always possible after turning it off fully; therefore, it is recommended to dim the light to a minimum intensity instead. This modulation is known as IM/DD (intensity modulation/direct detection) [27]. The available modulation techniques for transmitting signals from LED light sources can be classified as follows:
(i) On-off keying (OOK) [28]: the two logic signals of a digital transmission, "1" and "0", are represented as high and zero voltage at the transmitter end. This is achieved by flickering the illumination of the LED light to represent the on-off state of the transmitter.
(ii) Pulse width modulation (PWM) [29]: the modulated signal from the LED light is transmitted in the form of a square wave. The desired pulse level is obtained by adjusting the LED light dimming.
(iii) Pulse-position modulation (PPM) [30]: the encoded message from the LED light is transmitted as a single pulse in one of several possible time shifts.
(iv) Orthogonal frequency division multiplexing (OFDM) [31]: data is sent as parallel substreams of modulated data using multiple orthogonal subcarriers in a channel.
(v) Frequency-shift keying (FSK): a digital signal is carried by instantaneous frequency shifts of a constant-amplitude carrier.
(vi) Phase-shift keying (PSK) [32]: a digital signal is carried by instantaneous phase shifts of the baseband signal.
For high-speed data transmission channels, OFDM is used where the possibility of multipath fading and intersymbol interference is high [33]. Despite the enormous advantages afforded by OFDM, we do not implement it, because high-speed data transmission is not required for most indoor localization applications. In these cases, OOK is a better choice for data transmission from LEDs. Figure 5 shows how, after encoding and modulating the light, the LED light fixture transmits data to the camera of the smartphone. Our system decodes the LED coordinates after demodulating and decoding the received signal from the image processor.
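Since OOK is the chosen scheme, the following sketch illustrates how an LED-ID carrying a fixture's (x, y) coordinates might be framed as an OOK bit stream and decoded by thresholding pixel intensities. The preamble, 10-bit coordinate fields, and intensity levels are illustrative assumptions, not the exact frame format used in the paper.

```python
# Illustrative OOK framing/decoding sketch (assumed format, not the authors' exact scheme).
PREAMBLE = [1, 0, 1, 0, 1, 1]  # assumed synchronization pattern

def encode_led_id(x_cm: int, y_cm: int) -> list:
    """Pack x and y (each 0..1023 cm, 10 bits, MSB first) into an OOK bit stream."""
    bits = []
    for value in (x_cm, y_cm):
        bits += [(value >> i) & 1 for i in range(9, -1, -1)]
    return PREAMBLE + bits

def decode_led_id(samples: list, threshold: float = 0.5) -> tuple:
    """Threshold intensity samples back to bits and unpack (x, y)."""
    bits = [1 if s > threshold else 0 for s in samples]
    assert bits[:len(PREAMBLE)] == PREAMBLE, "sync pattern not found"
    payload = bits[len(PREAMBLE):]
    x = sum(b << i for i, b in enumerate(reversed(payload[:10])))
    y = sum(b << i for i, b in enumerate(reversed(payload[10:20])))
    return x, y

# "On" is transmitted as high intensity (~0.9) and "off" as dim (~0.1), so the
# light never turns fully off, matching the dimming recommendation above.
tx = [0.9 if b else 0.1 for b in encode_led_id(150, 300)]
print(decode_led_id(tx))  # → (150, 300)
```

In a rolling-shutter receiver, the `samples` would come from averaging pixel rows of the captured stripe pattern rather than from a clean list as here.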
4.3. Channel Modeling for OCC
The pixel model of the IS [34] expresses the received pixel value in terms of $E_b$, the energy per bit, $N_0$, the spectral noise density, the unit pixel value, the amplitude of the signal, the camera exposure duration as a ratio of the signal cycle, a noise term, and two model fitting parameters for system noise.
Considering the distortion in the channel, the signal-to-noise-plus-interference ratio (SNIR) can be determined from the average pixel intensity $P$ transmitted from the LED light and the average distortion factor $\gamma$, whose value lies between 0 and 1 (i.e., $0 \le \gamma \le 1$). Here $\gamma = 1$ indicates minimum signal loss, where the LED light is focused directly on the camera lens, and $\gamma = 0$ indicates that no image pixel is generated on the image sensor of the camera.
With the additive white Gaussian noise (AWGN) characteristic of the camera channel [35], the channel capacity of the space-time modulation can be expressed by the Shannon capacity formula as

$$C = R_f\,B_s \log_2(1 + \mathrm{SNIR}),$$

where $R_f$ is the frame rate of the smartphone camera and $B_s$ is the spatial bandwidth, which represents how much information is carried by the pixels in each image frame. The spatial bandwidth is equivalent to the number of orthogonal or parallel channels in a MIMO system.
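A back-of-envelope calculation illustrates this capacity expression, assuming (as the surrounding text suggests) that capacity is the product of frame rate, spatial bandwidth, and the Shannon log term. The numeric values below are assumptions, not measurements from the paper.

```python
import math

# Assumed parameters for an OCC link; none of these are the paper's measured values.
R_f = 30          # frames per second (typical smartphone camera)
B_s = 8           # distinguishable LED regions per frame, i.e., parallel channels (assumption)
snir = 100        # linear SNIR (assumption)

# Shannon-style capacity in bits per second under the form described above.
C = R_f * B_s * math.log2(1 + snir)
```

Even modest parameters give on the order of a kilobit per second, comfortably enough for periodic LED-ID broadcasts, which is why high-speed schemes such as OFDM are unnecessary here.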
The bit error rate (BER), which depends on the SNIR and the modulation scheme, measures the impact of the channel. The noise sources that affect the transmission of the light signal from the LEDs are intersymbol interference, background and transmitter LED shot noise, and thermal noise.
Contemporary smartphones use complementary metal oxide semiconductor (CMOS) based ISs with a rolling shutter. With this shutter technique, light intensities on the IS are captured row by row, and the whole image is composed of different pixel arrays. Therefore, the exposure time delay between pixel array lines records the changing illumination state of the LED light as a group of pixels in one image. The optical channel DC gain, which models the channel characteristic from the LED lights to the camera, can be determined [36] as

$$H(0) = \begin{cases} \dfrac{(m+1)A}{2\pi d^{2}}\cos^{m}(\phi)\,T_s(\psi)\cos(\psi), & 0 \le \psi \le \Psi_c,\\[4pt] 0, & \psi > \Psi_c, \end{cases}$$

where $m$ is the order of Lambertian emission, $A$ is the area on the IS, $d$ is the distance between an LED and the IS, $T_s(\psi)$ is the optical filter coefficient for signal transmission, $\phi$ is the angle of irradiance, $\psi$ is the angle of incidence, and $\Psi_c$ is the camera FOV semi-angle. Here, $m$ can be defined as

$$m = \frac{-\ln 2}{\ln\big(\cos \Phi_{1/2}\big)},$$

where $\Phi_{1/2}$ is the semi-angle at half illuminance of the LED.
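A short sketch of this gain model follows, using the standard Lambertian form from the VLC literature; the receiver area, distance, and angles are assumed values for illustration only.

```python
import math

def lambertian_order(half_power_angle_deg: float) -> float:
    """m = -ln 2 / ln(cos(phi_1/2)) for a Lambertian LED."""
    return -math.log(2) / math.log(math.cos(math.radians(half_power_angle_deg)))

def channel_dc_gain(m, area, d, psi_deg, phi_deg, Ts=1.0, fov_deg=60.0):
    """H(0) for incidence angle psi within the FOV semi-angle; 0 outside it."""
    if psi_deg > fov_deg:
        return 0.0
    psi, phi = math.radians(psi_deg), math.radians(phi_deg)
    return ((m + 1) * area / (2 * math.pi * d**2)) * \
        math.cos(phi)**m * Ts * math.cos(psi)

# A 60° half-power angle gives m = 1 (a pure Lambertian source).
m = lambertian_order(60.0)
# Assumed geometry: 1 cm^2 receiver area, 2.5 m LED-to-camera distance, 15° angles.
gain = channel_dc_gain(m, area=1e-4, d=2.5, psi_deg=15.0, phi_deg=15.0)
```

The hard cutoff at the FOV semi-angle is what makes fixture spacing matter: an LED outside the camera's FOV contributes no gain at all, hence the requirement of at least three fixtures in view.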
To calculate the channel output, it is necessary to consider the channel noise (which is independent of the signal characteristics) and the number of LED-ID signals, as follows:

$$y(t) = R\sum_{i=1}^{N} H_i(0)\,x_i(t) + n(t),$$

where $N$ is the number of LED-ID signals, $n(t)$ is the channel noise, and $R$ is the camera responsivity.
The average optical power received at the IS of the camera can be calculated as

$$P_r = H(0)\,P_t.$$
5. Distance Calculation and Localization
5.1. Distance Calculation between LED Light and Camera
The fundamental operation of a camera is diagrammed in Figure 6, where an image of a target LED light is projected on the IS of a camera. Light from the target LED passes through the camera lens and is concentrated on the IS plate. The projected image on the IS plate is an inverted image of the LED light fixture.
Consider that $f$ is the focal length of the camera, $d$ is the distance from the camera lens to the target LED light, and $v$ is the distance from the lens to the projected image on the IS. By the thin-lens equation, we can write

$$\frac{1}{f} = \frac{1}{d} + \frac{1}{v}. \qquad (11)$$
The magnification of the lens is the ratio of the projected image size to the geometric size of the LED light. If the camera projects a square image on the IS, where the heights (and widths) of the target LED and the projected image are $h$ and $h'$, respectively, then the lens magnification can be expressed as

$$M = \frac{h'}{h} = \frac{v}{d}. \qquad (12)$$
If we ignore the loss in the optical channel, then $d \gg f$; more precisely, $v \approx f$. By combining (11) with (12), we get

$$M \approx \frac{f}{d}. \qquad (13)$$
The number of pixels in the image is the ratio of the projected image size on the IS to the unit pixel area of the same sensor. If $N$ is the number of pixels on the IS, $a$ is the unit pixel area of the IS, and $A$ is the area of the target LED light source, then we obtain

$$N = \frac{M^{2}A}{a} = \frac{f^{2}A}{a\,d^{2}}. \qquad (14)$$
Different shapes of light fixtures are available on the market; however, rectangular/square and circular are the typical shapes. For a circular LED light fixture of radius $r$, the area of the light source is $A = \pi r^{2}$. For a rectangular light source of width $w$ and length $l$, the area is $A = wl$.
If we consider only the LED lights within the smartphone's FOV, the distance from each LED light to the smartphone camera is different for every light. The projected image of an LED light facing the camera straight-on is larger than the image of an LED light located at an angle to the same camera. These distances can be determined by (14), which must be modified due to the relative motion between the camera and the LED light fixtures. For each camera, the focal length $f$ and the unit pixel area $a$ of the IS are fixed. Consequently, if we know the real physical area $A$ of the lighting fixture, we can write (14) as

$$d = \frac{K}{\sqrt{N}}, \quad K = f\sqrt{\frac{A}{a}}, \qquad (15)$$

where $K$ is constant for each camera and LED light.
Figure 7 shows how the image areas vary with the distance of the LED lights from the camera. From (15), the distance from the IS to the LED light fixture is inversely proportional to the square root of the image area on the IS.
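The inverse-square-root relation can be sketched numerically. Here the focal length, fixture area, and pixel pitch are assumed values, and the constant is taken as $K = f\sqrt{A/a}$, consistent with the pinhole-camera derivation above.

```python
import math

# Assumed camera/fixture parameters (illustrative only).
f = 4.0e-3        # focal length: 4 mm
A = 0.3 * 0.3     # square fixture area: 0.09 m^2
a = (1.4e-6)**2   # unit pixel area for a 1.4 µm pixel pitch

K = f * math.sqrt(A / a)

def distance_from_pixels(n_pixels: float) -> float:
    """Distance (m) from the number of pixels the fixture occupies on the IS."""
    return K / math.sqrt(n_pixels)

def pixels_from_distance(d: float) -> float:
    """Inverse relation: the pixel count shrinks with the square of distance."""
    return (K / d) ** 2

# A fixture imaged at 2 m covers four times the pixels of the same fixture at 4 m.
n2, n4 = pixels_from_distance(2.0), pixels_from_distance(4.0)
```

This factor-of-four behaviour is exactly why nearby fixtures appear much larger on the IS than distant ones, as Figure 7 depicts.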
5.2. Scenario of Image Area on the IS for Dynamic of Camera
The smartphone is the only moving device with respect to the LED lights in an indoor environment. The FOV of the camera shifts with changes in the smartphone's location; therefore, the images of the LED lights on the IS of the camera also change. Additionally, the lighting infrastructure must be designed in a way that keeps at least three LED lights within the FOV of the camera. Figure 8 describes a scenario in which the number of LED lights detected by the camera varies between 4 and 5 as the smartphone moves from location 1 to location 5.
Square blocks, black dots, and circles represent the LED lights, the camera, and the FOV of the camera, respectively. At location 1 of the smartphone, five LED light images are projected on the IS. The number of projected images varies from 5 to 4 as the smartphone moves from location 1 to location 5. If more than three LED light images are projected on the IS, the location of the smartphone can be calculated more accurately.
Next, we consider another scenario in which the smartphone moves from location 1 to location 2, as shown in Figure 9(b). Due to the change in smartphone location, the images of the LED lights on the IS also change. We consider three possible locations of LED lights (e.g., red, green, and blue), all attached to the ceiling. At location 1 of the smartphone in Figure 9(c), the blue LED light is relatively close to the camera, which means that the angle from the camera to the blue LED light is small. This angle is larger for the red LED light than for the blue one; in contrast, the angle is nearly zero for the green LED light. These distances are calculated using (15).
We denote the straight-line distances of the red, green, and blue LED lights from the camera lens as $d_R$, $d_G$, and $d_B$, respectively. Furthermore, $N_R$, $N_G$, and $N_B$ are the numbers of pixels on the IS for the red, green, and blue light fixtures, respectively, at location 1. Therefore, (15) can be written for each LED light as

$$d_R = \frac{K}{\sqrt{N_R}}, \quad (16) \qquad d_G = \frac{K}{\sqrt{N_G}}, \quad (17) \qquad d_B = \frac{K}{\sqrt{N_B}}. \quad (18)$$
From Figure 9(c), the image size on the IS is largest for the green LED light and decreases gradually from blue to red. Therefore, comparing the images on the IS can be expressed mathematically as $N_G > N_B > N_R$. As we know from (15), the number of pixels projected on the IS for a given object depends only on the distance between the camera and the object when the other factors remain constant. Combining this relation with (16)–(18) gives the conclusion $d_G < d_B < d_R$.
At location 2 in Figure 9(a), the angular distance between the camera and the blue LED light fixture increases compared with that of the red LED light; therefore, the image size changes accordingly. In contrast, the image size of the green LED light is almost identical due to the small shift in angle.
For location 2, the distances from the LED lights to the camera lens are denoted $d_R'$, $d_G'$, and $d_B'$ for the red, green, and blue LEDs, respectively, and the numbers of pixels on the IS are labeled $N_R'$, $N_G'$, and $N_B'$. Applying (15) to the three LED light images gives

$$d_R' = \frac{K}{\sqrt{N_R'}}, \quad (19) \qquad d_G' = \frac{K}{\sqrt{N_G'}}, \quad (20) \qquad d_B' = \frac{K}{\sqrt{N_B'}}. \quad (21)$$
From Figure 9(a), the numbers of image pixels on the IS can again be ordered, and from (19)–(21) the corresponding distances follow the inverse ordering.
5.3. Uploading Information to the Lighting Server
The smartphone then sends the distance information with the corresponding LED-IDs to the lighting server via a Wi-Fi AP. This information is sent as a packet with two slots, where the first slot holds the coordinate information of the LED light and the second slot holds its distance information. The LED light coordinates are already stored in the lighting server. After receiving a signal from the smartphone, the lighting server generates a virtual map of the LED lights by extracting the information from the packet. With the mathematical model of trilateration (or multilateration for more than three LED lights), the lighting server calculates the location of the smartphone.
5.4. Computing Smartphone Location
The beam of an LED light propagates from ceiling to floor, and the illumination spreads 360° from the center of the light source. At a given distance from the LED light, the intensity of the illumination is equal. If the smartphone is located anywhere on such a circle of equal intensity, the projected image area on the IS is the same as it would be at any other point on that circle. Therefore, the chance of false location identification is high with a single LED light source. Figure 10(a) shows the original location of the camera together with a probable false location of the camera (as a faded image). This false location information creates errors during location mapping in the lighting server.
Consequently, to mitigate location-estimation errors when only a single LED light is visible, we consider another LED light as a reference for the first one. As shown in Figure 10(b), the location is narrowed down to the circle formed by the intersection of the two spheres. Although more LED lights are available in this scenario, duplication still occurs; hence, a third LED light is necessary to deliver accurate location measurements.
With a third LED light, we can narrow the location of the smartphone down further. In Figure 10(c), three circles centered on the three landmarks overlap at three different locations, where the radius of each circle is the distance to the corresponding landmark. Therefore, two other candidate locations of the smartphone, along with the original, are still possible even with three landmarks. However, this extra location information does not cause confusion in the smartphone position estimate: the smartphone's location can be estimated accurately by comparing the information from two LED lights with the information from the third.
The method of determining the location of the smartphone using three fixed reference points (or LED lights) is known as trilateration [37]; with more than three points, it is known as multilateration. Trilateration requires solving three simultaneous nonlinear equations. The reference LED lights can be situated either in a triangle or in a straight line.
If $(x_i, y_i, z_i)$ are the coordinates of an LED light on the ceiling, where $i = 1, 2, 3$, and $(x, y, z)$ is the coordinate of the smartphone camera, then the distance $d_i$ from the camera to the $i$th LED light can be represented by the following equation:

$$d_i = \sqrt{(x_i - x)^2 + (y_i - y)^2 + (z_i - z)^2}.$$
A matrix equation can be generated from the above equations for $i = 1, 2, 3$ by squaring them and subtracting pairwise, which eliminates the quadratic terms in the unknowns (23). The resulting matrix equation can be written as $A\mathbf{x} = \mathbf{b}$ (24), where $\mathbf{x} = (x, y, z)^{T}$.
Two different cases can occur when solving the trilateration problem for locating the smartphone. The LED lights can be distributed randomly as in Figure 11(a) or aligned in a straight line as shown in Figure 11(b).
To identify the location of a smartphone from reference LED lights located at the vertexes of a triangle, the general solution of (24) can be expressed as $\mathbf{x} = \mathbf{x}_p + t\,\mathbf{x}_h$ (25), where $\mathbf{x}_p$ denotes a particular solution and $t$ is a real parameter. If $A\mathbf{x}_h = \mathbf{0}$ is the homogeneous system, then $\mathbf{x}_h$ is its solution.
The matrix $A$ is written in pseudoinverse form to determine the particular solution $\mathbf{x}_p$. The value of $t$ can then be evaluated using the expressions for $d_1$, $d_2$, and $d_3$.
The smartphone's coordinates are then generated by solving (25).
To identify the location of a smartphone from reference LED lights located in a straight line, the general solution of (24) is expressed as $\mathbf{x} = \mathbf{x}_p + t_1\mathbf{x}_{h1} + t_2\mathbf{x}_{h2}$ (27), where $\mathbf{x}_{h1}$ and $\mathbf{x}_{h2}$ are two solutions of the homogeneous system $A\mathbf{x}_h = \mathbf{0}$ and $t_1$, $t_2$ are real parameters.
The mathematical expression of (24) differs for the case with more than three LED lights. The solution can then be found by solving a multilateration problem, whose relevant equation takes the same matrix form $A\mathbf{x} = \mathbf{b}$ with more than three rows (29).
On the basis of the least squares method, the solution of (29) can be found as

$$\hat{\mathbf{x}} = (A^{T}A)^{-1}A^{T}\mathbf{b}. \qquad (30)$$
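The linearized multilateration solve can be sketched as follows. The ceiling height is taken as known (as in the scheme above), so only (x, y) is recovered; the fixture layout and test point are assumed values for illustration. Subtracting the first squared sphere equation from the others yields the linear system solved here by least squares.

```python
import numpy as np

def multilaterate_xy(leds: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """leds: (n, 2) ceiling-plane LED coordinates; dists: (n,) horizontal distances.

    Subtracting the first equation |p - l_0|^2 = d_0^2 from the others cancels
    the quadratic terms, leaving A p = b, solved in the least-squares sense.
    """
    x0, y0 = leds[0]
    A = 2 * (leds[1:] - leds[0])                      # rows: 2*(x_i - x_0, y_i - y_0)
    b = (dists[0]**2 - dists[1:]**2
         + np.sum(leds[1:]**2, axis=1) - x0**2 - y0**2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Assumed layout: three fixtures on a line plus one off-line fixture (metres).
leds = np.array([[0.0, 0.0], [1.5, 0.0], [3.0, 0.0], [1.5, 1.5]])
true = np.array([0.4, 3.175])                 # hypothetical smartphone position
dists = np.linalg.norm(leds - true, axis=1)   # exact distances for the demo
est = multilaterate_xy(leds, dists)           # recovers the true position
```

Note that with purely collinear fixtures the linearized system cannot resolve the coordinate perpendicular to the line (the sign ambiguity discussed above), which is why the sketch includes a fourth, off-line fixture.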
In Figure 12, three LED lights are located at three points in a two-dimensional plane. Their illumination spheres intersect at two points that are possible locations for the smartphone camera. The lighting server chooses between the multiple possible locations of the smartphone using trilateration.
Figure 13 shows the systematic estimation error that arises when the lighting server estimates the position of the smartphone. The error is smallest for the horizontal bias (Figure 13(a)) of the indoor environment and much higher for the vertical bias (Figure 13(b)). Therefore, system performance degrades considerably when measuring the vertical position.
The possible locations of the smartphone around a cluster of three LED lights are shown in Figure 14. Dotted lines represent the optical links between the camera and the LED lights, and solid lines represent the fixed distances between the LED lights. In almost all cases, the distance from the smartphone to each LED light is different. In some cases (Figures 14(b), 14(c), 14(d), and 14(g)), the distances to two LED lights are equal while the distance to the third differs. Additionally, there are a few cases (Figures 14(e), 14(f), and 14(h)) where all three distances are different from each other. There is only one case (Figure 14(a)) where the camera is equidistant from all LED lights. The algorithm can locate smartphones at all of these locations without error.
Figure 15 shows an example of the final stage of a server-side process for estimating the smartphone coordinates. Three LED lights are imaged within the FOV of the camera. The distance between adjacent LED lights is equal, and in our tests this value is 150 cm. Taking the first LED light as the origin of the ceiling plane, the LED light coordinates are (0, 0), (150, 0), and (300, 0), all in cm. Here, the y coordinates are the same for these three LED lights but the x coordinates are all different. We chose these coordinates to simplify this example. The y coordinate is equal for all the LED lights, so we ignore it in our calculations.
Let us consider a smartphone placed between the first and second LED lights and far away from the third. More precisely, this smartphone is closer to the first LED light than the second. The distances from the camera to the three LED lights are 320 cm, 336.05 cm, and 410.37 cm, respectively, measured by calculating the image sizes on the image sensor. The relative distances from these three LED lights show that the smartphone is 40 cm away from the first LED light and 110 cm away from the second along the x axis. In this example, we consider the first LED light to be located at the origin, and the coordinate of the smartphone is calculated with respect to it. Hence, 40 cm is the x coordinate of the smartphone. The vertical (z) coordinate of the smartphone camera can be measured with the Pythagorean theorem and is calculated as 317.5 cm. Finally, the estimated coordinate is (40, 317.5) cm.
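The arithmetic of this example can be checked directly. Assuming the LEDs sit at x = 0, 150, and 300 cm along the ceiling (the layout described above), the recovered x and z reproduce the three measured distances to within the rounding of the measurements:

```python
import math

led_x = [0.0, 150.0, 300.0]        # LED positions along the ceiling line, cm
x = 40.0                           # smartphone x offset from the first LED, cm
z = math.sqrt(320.0**2 - x**2)     # Pythagorean theorem: ~317.49 cm

# Distances back-computed from (x, z); they match the measured
# 320, 336.05, and 410.37 cm to within a few hundredths.
dists = [math.hypot(lx - x, z) for lx in led_x]
print([round(v, 2) for v in dists])   # -> [320.0, 336.01, 410.37]
```

The small discrepancy in the second distance (336.01 vs. 336.05 cm) reflects measurement rounding on the image sensor rather than an inconsistency in the geometry.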
5.5. Estimate the Next Location of the Smartphone
After finalizing the smartphone coordinates, the lighting server sends coordinate information to the smartphone, updates this information, and stores the information for future use. The location of the smartphone is always changing. While the server estimates the smartphone position, the smartphone may have moved. It is required to run another serverside algorithm in parallel to estimate the velocity, acceleration, and next possible position of the smartphone. We use a Kalman filter to track the next position of the smartphone. This filter depends on the present input measurement instead of previous information (e.g., velocity and acceleration) from the smartphone [38].
The Kalman filter is a recursive, linear estimator mostly used to approximate errors in navigation applications, providing a minimum-variance estimate in the least squares sense under noisy processes. The Kalman filter gain, the current estimate, and the new error in the estimate are the three important calculations in Figure 16. The Kalman filter gain weighs the error in the estimate against the error in the measurement. The current estimate, in turn, depends on the previous estimate and the present measured value, and the relative importance between them is also fixed by the Kalman filter gain. Furthermore, the Kalman filter gain and the current estimate are needed to compute the new error in the estimate, which is passed on as the error in the future estimate. The preliminary estimated location of the smartphone can be described as

x_k = A x_(k-1) + w_k

where x_(k-1) is the initial location of the smartphone, A is the state (or adaptation) matrix, and w_k is noise added to the initial location.
The measurements and state vectors are weighted by their respective processes' covariance matrices. The process covariance matrix (or error in the position estimation) can be represented as

P_k = A P_(k-1) A^T + Q_k

where P_(k-1) is the initial process covariance matrix and Q_k is process noise.
The filter deweights the measured value when the variance is large and the gain is low in comparison to the state estimate; this leads the filter to prioritize the predicted state rather than the measurements. In the opposite circumstance, the measured value is weighted more heavily than the predicted value due to the small variance and high gain. The gain of the Kalman filter is known as the Kalman gain, which depends on the error in the estimate and the error in the measurement. The Kalman gain, K, is the ratio of the error in the estimate to the total error in both the estimate and the measurement:

K = P_k H^T / (H P_k H^T + R)

where R is the observation (measurement) error covariance and H is the transformation matrix, which converts the covariance matrix into the Kalman filter gain matrix. The value of the Kalman gain lies between 0 and 1 (i.e., 0 <= K <= 1). If K is near 1, the error in the measurement is nearly 0; in this regime the estimates are unstable (large error in the estimate) and the measurements are accurate.
The error in the location estimate will decrease when the value of K is close to 0; therefore, the difference between the estimate and the actual position narrows. The expression for the current estimate can be written as

x̂_k = x̂_(k-1) + K (z_k - H x̂_(k-1))

where x̂_k is the present estimate, x̂_(k-1) is the previous estimate, and z_k is the measured smartphone coordinate.
Similarly, if the Kalman filter gain is large, then the present error in the estimate is small. The new error in the estimate, which is carried forward into the next prediction, can be defined as follows:

P_k = (I - K H) P_(k-1)
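The update loop above can be sketched as a scalar (single-coordinate) Kalman filter. The noise variances, the stationary true position, and the simplifications A = 1 and H = 1 are assumptions for illustration, not the paper's parameters:

```python
import random

random.seed(1)
q, r = 0.01, 4.0        # assumed process and measurement noise variances
x_est, p = 0.0, 100.0   # initial estimate and error in the estimate
true_x = 40.0           # true smartphone coordinate, cm (held fixed here)

for _ in range(50):
    p = p + q                                  # predicted error: P = A P A^T + Q, A = 1
    k = p / (p + r)                            # Kalman gain: estimate error / total error
    z = true_x + random.gauss(0.0, r ** 0.5)   # noisy coordinate measurement
    x_est = x_est + k * (z - x_est)            # current estimate: previous + gain * innovation
    p = (1.0 - k) * p                          # new error in the estimate: P = (1 - K H) P
```

With the deliberately poor initial estimate (0 cm against a true 40 cm), the first gain is close to 1, so the filter trusts the first measurement almost entirely; as the error in the estimate shrinks, the gain drops and later measurements are smoothed rather than followed.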
5.6. Postponed Signal Propagation
The smartphone location measurement and signal propagation stop if the smartphone user leaves the room. To recognize this situation, the lighting server broadcasts a message several times and waits for a reply. In the event of no reply, the server stops sending data and stores the position information.
6. Simulation Result
To evaluate the performance of our proposed scheme, we used a smartphone in a 1600 sq. ft. indoor environment. The test instrument specifications are provided in Table 1. The simulation results vary with the camera and luminaire parameters.

Figure 17 plots BER versus SNIR for the theoretical and simulation results. The two curves almost merge because we ignore the effect of channel noise in our simulation. The figure shows that BER increases as the SNIR of the OOK signaling decreases.
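A theoretical OOK curve of this kind can be generated with the Gaussian Q-function. The closed form BER = Q(sqrt(SNR)) used below is one common convention for OOK over an additive white Gaussian noise channel; the exact expression depends on how SNIR is defined, so treat this as an illustrative sketch rather than the paper's model:

```python
import math

def q_func(v):
    """Gaussian tail probability Q(v) = 0.5 * erfc(v / sqrt(2))."""
    return 0.5 * math.erfc(v / math.sqrt(2.0))

def ook_ber(snir_db):
    """Theoretical OOK BER under one common AWGN convention (assumption)."""
    snir = 10.0 ** (snir_db / 10.0)
    return q_func(math.sqrt(snir))

for snir_db in range(0, 21, 4):
    print(snir_db, ook_ber(snir_db))   # BER falls as SNIR rises
```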
In Figure 18, the simulation result shows that initially the location of the smartphone is not identified and is marked as unassigned. Estimation accuracy for the 1st estimate was noticeably worse than for the 4th, and the estimation process improves sequentially after a few steps. Meanwhile, the distance between consecutive estimates stays between 9 and 10 cm.
The possibility of the z coordinate changing is negligible; therefore, we only have to calculate the x and y coordinates of the smartphone. In Figure 19, the green line shows the mean value of the smartphone location and the red line is the estimated value. The location estimation using the Kalman filter is plotted for the x axis in Figure 19(a) and for the y axis in Figure 19(b). Figure 19 shows that the location estimate deviates from the mean value of the location. We consider a 1 Hz sampling rate and a run time of 50 s, so overall 50 samples were used in the simulation.
Distance measurement using OCC depends on the size of the projected image of the LEDs on the IS. As the distance increases, the projected image occupies less area on the IS than at a shorter distance. Therefore, the chance of successful smartphone localization shrinks as the vertical distance between the smartphone and the LED lights at the ceiling increases. In Figure 20, when the vertical distance from the camera to the ceiling remains within 10 m, the image occupies an area greater than or equal to 4 pixels. From 10 to 35 m, the localization possibility is reduced as the occupied pixel area decreases. Theoretically, the image is required to occupy at least a unit pixel area of the image sensor; however, it is difficult to ensure that the projected image aligns edge-to-edge with a pixel. Therefore, beyond 35 m the localization possibility is zero because the occupied area remains below a unit pixel. We consider a fixed transmitter size, and in that case its image occupies less than a unit pixel area beyond 35 m. If the transmitter size changes, the distance measurement performance changes as well.
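The inverse-square relation between distance and occupied pixel area can be illustrated with a simple pinhole camera model. The focal length, pixel pitch, and LED diameter below are assumed values chosen so the curve crosses a unit pixel near 35 m, not the paper's test hardware:

```python
import math

f = 4.0e-3       # focal length, m (assumed)
pixel = 5.6e-6   # pixel pitch, m (assumed)
led_d = 0.05     # LED luminaire diameter, m (assumed)

def occupied_pixels(dist_m):
    """Approximate pixel area covered by the LED image at distance dist_m.

    Pinhole projection: the image diameter scales as f * led_d / dist_m,
    so the occupied area falls off as 1 / dist_m**2."""
    img_d = f * led_d / dist_m
    return math.pi * (img_d / 2.0) ** 2 / pixel ** 2

for dist in (5, 10, 20, 35):
    print(dist, round(occupied_pixels(dist), 2))
```

Doubling the distance quarters the occupied area, which is why the localization range is so sensitive to the transmitter size and sensor resolution.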
Localization estimation error occurs due to frequent changes in the smartphone position. We tested our algorithm both with and without the Kalman filter and plotted the comparison in Figure 21. A significant performance deviation is visible in the figure. At the initial stage of measurement, both show the same percentage of estimation error, and in both cases the estimation error decreases exponentially with simulation runtime. At the 10 s mark, the estimation error is near zero for the Kalman filter based estimation, whereas at the same time the estimation without the Kalman filter shows a 50% error.
7. Conclusion
In this paper, we proposed a smartphone localization system for indoor environments. Using OCC for smartphone localization is a novel idea, and we combine it with a photogrammetry technique. The localization resolution for the smartphone is kept within 10 cm. The proposed system relies upon a central lighting server for the positioning calculations. Signaling from the LED light fixtures and localization of the smartphone are confined to a certain indoor environment; therefore, this localization scheme is more secure. Additionally, the chance of error in the position estimation is higher for a system that omits the Kalman filter, so we included a Kalman filter to track the next possible location of the smartphone. Thus, the proposed scheme is more accurate than existing localization schemes. The lighting fixtures are useful not only for localization but also for illumination. In future work, we will test and evaluate the performance in different environmental scenarios (e.g., escalators and staircases) and will consider variation in the height between the smartphone and the ceiling light fixtures. Meanwhile, we are also trying to optimize the position identification resolution without using the Kalman filter, to make the system simpler.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the Global IT Talent Support Program (IITP2017001806) supervised by the IITP (Institute for Information and Communication Technology Promotion).
References
[1] A. Yassin, Y. Nasser, M. Awad et al., "Recent advances in indoor localization: a survey on theoretical approaches and applications," IEEE Communications Surveys & Tutorials, vol. 19, no. 2, pp. 1327–1346, 2017.
[2] K. A. Nuaimi and H. Kamel, "A survey of indoor positioning systems and algorithms," in Proceedings of the International Conference on Innovations in Information Technology (IIT '11), pp. 185–190, Abu Dhabi, UAE, June 2011.
[3] G. Kul, T. Özyer, and B. Tavli, "IEEE 802.11 WLAN based real time indoor positioning: literature survey and experimental investigations," Procedia Computer Science, vol. 34, pp. 157–164, 2014.
[4] B. H. Wellenhof, H. Lichtenegger, and J. Collins, Global Positioning System: Theory and Practice, Springer, Wien, Austria, 5th edition, 2001.
[5] A. Kleusberg and R. B. Langley, "The limitations of GPS," GPS World, vol. 1, no. 2, 1990.
[6] P. Zanuttigh, G. Marin, C. Dal Mutto, F. Dominio, L. Minto, and G. M. Cortelazzo, Time-of-Flight and Structured Light Depth Cameras: Technology and Applications, Springer International Publishing, 1st edition, 2016.
[7] S. Foix, G. Alenya, and C. Torras, "Lock-in time-of-flight (ToF) cameras: a survey," IEEE Sensors Journal, vol. 11, no. 9, pp. 1917–1926, 2011.
[8] H. Liu, H. Darabi, P. Banerjee, and J. Liu, "Survey of wireless indoor positioning techniques and systems," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 6, pp. 1067–1080, 2007.
[9] Z. Farid, R. Nordin, and M. Ismail, "Recent advances in wireless indoor localization techniques and system," Journal of Computer Networks and Communications, vol. 2013, Article ID 185138, 12 pages, 2013.
[10] S.-H. Jung, B.-C. Moon, and D. Han, "Unsupervised learning for crowdsourced indoor localization in wireless networks," IEEE Transactions on Mobile Computing, vol. 15, no. 11, pp. 2892–2906, 2016.
[11] K. Heurtefeux and F. Valois, "Is RSSI a good choice for localization in wireless sensor network?" in Proceedings of the 26th IEEE International Conference on Advanced Information Networking and Applications (AINA '12), pp. 732–739, Fukuoka, Japan, March 2012.
[12] F. Gustafsson and F. Gunnarsson, "Mobile positioning using wireless networks: possibilities and fundamental limitations based on available wireless network measurements," IEEE Signal Processing Magazine, vol. 22, no. 4, pp. 41–53, 2005.
[13] Q. Liu, S. Levinson, Y. Wu, and T. Huang, "Interactive and incremental learning via a mixture of supervised and unsupervised learning strategies," in Proceedings of the 5th Joint Conference on Information Sciences (JCIS '00), vol. 1, pp. 555–558, March 2000.
[14] K. Kraus, Photogrammetry: Geometry from Images and Laser Scans, vol. 1, Walter de Gruyter, 2nd edition, 2007.
[15] M. Uysal, C. Capsoni, Z. Ghassemlooy, A. Boucouvalas, and E. Udvary, Optical Wireless Communications: An Emerging Technology, Springer International Publishing, Cham, Switzerland, 2016.
[16] N. Saha, M. S. Ifthekhar, N. T. Le, and Y. M. Jang, "Survey on optical camera communications: challenges and opportunities," IET Optoelectronics, vol. 9, no. 5, pp. 172–183, 2015.
[17] P. Luo, M. Zhang, Z. Ghassemlooy et al., "Experimental demonstration of RGB LED-based optical camera communications," IEEE Photonics Journal, vol. 7, no. 5, pp. 1–12, 2015.
[18] A. C. Boucouvalas, P. Chatzimisios, Z. Ghassemlooy, M. Uysal, and K. Yiannopoulos, "Standards for indoor optical wireless communications," IEEE Communications Magazine, vol. 53, no. 3, pp. 24–31, 2015.
[19] X. Li, J. Wang, A. Olesk, N. Knight, and W. Ding, "Indoor positioning within a single camera and 3D maps," in Proceedings of the Ubiquitous Positioning Indoor Navigation and Location Based Service (UPINLBS '10), pp. 1–9, Kirkkonummi, Finland, October 2010.
[20] J. M. S. Matamoros, J. R. M. Dios, and A. Ollero, "Cooperative localization and tracking with a camera-based WSN," in Proceedings of the IEEE International Conference on Mechatronics (ICM '09), pp. 1–6, Malaga, Spain, April 2009.
[21] J. Kim and H. Jun, "Vision-based location positioning using augmented reality for indoor navigation," IEEE Transactions on Consumer Electronics, vol. 54, no. 3, pp. 954–962, 2008.
[22] M. S. Ifthekhar, N. Saha, and Y. M. Jang, "Neural network based indoor positioning technique in optical camera communication system," in Proceedings of the 5th International Conference on Indoor Positioning and Indoor Navigation (IPIN '14), pp. 431–435, Busan, South Korea, October 2014.
[23] H. Hile and G. Borriello, "Positioning and orientation in indoor environments using camera phones," IEEE Computer Graphics and Applications, vol. 28, no. 4, pp. 32–39, 2008.
[24] M. S. Ifthekhar, M. A. Hossain, C. H. Hong, and Y. M. Jang, "Radiometric and geometric camera model for optical camera communications," in Proceedings of the 7th International Conference on Ubiquitous and Future Networks (ICUFN '15), pp. 53–57, Sapporo, Japan, July 2015.
[25] G. J. Ward, F. M. Rubinstein, and R. D. Clear, "A ray tracing solution for diffuse interreflection," in Proceedings of the 15th International Conference on Computer Graphics and Interactive Techniques, pp. 85–92, August 1988.
[26] I. Moreno and C.-C. Sun, "Modeling the radiation pattern of LEDs," Optics Express, vol. 16, no. 3, pp. 1808–1819, 2008.
[27] S. Hranilovic and F. R. Kschischang, "Optical intensity-modulated direct detection channels: signal space and lattice codes," IEEE Transactions on Information Theory, vol. 49, no. 6, pp. 1385–1399, 2003.
[28] S. Rajagopal, R. D. Roberts, and S.-K. Lim, "IEEE 802.15.7 visible light communication: modulation schemes and dimming support," IEEE Communications Magazine, vol. 50, no. 3, pp. 72–82, 2012.
[29] F. Vasca and L. Iannelli, Dynamics and Control of Switched Electronic Systems, Springer, London, UK, 1st edition, 2012.
[30] E. D. J. Smith, R. J. Blaikie, and D. P. Taylor, "Performance enhancement of spectral-amplitude-coding optical CDMA using pulse-position modulation," IEEE Transactions on Communications, vol. 46, no. 9, pp. 1176–1185, 1998.
[31] J. Armstrong, "OFDM for optical communications," Journal of Lightwave Technology, vol. 27, no. 3, pp. 189–204, 2009.
[32] X. Song, F. Yang, and J. Cheng, "Subcarrier intensity modulated optical wireless communications in atmospheric turbulence with pointing errors," IEEE/OSA Journal of Optical Communications and Networking, vol. 5, no. 4, pp. 349–358, 2013.
[33] J. Armstrong, "Analysis of new and existing methods of reducing intercarrier interference due to carrier frequency offset in OFDM," IEEE Transactions on Communications, vol. 47, no. 3, pp. 365–369, 1999.
[34] T. Nguyen, A. Islam, T. Hossan, and Y. M. Jang, "Current status and performance analysis of optical camera communication technologies for 5G networks," IEEE Access, vol. 5, pp. 4574–4594, 2017.
[35] I. Djordjevic, W. Ryan, and B. Vasic, Coding for Optical Channels, Springer, New York, NY, USA, 2010.
[36] J. M. Kahn and J. R. Barry, "Wireless infrared communications," Proceedings of the IEEE, vol. 85, no. 2, pp. 265–298, 1997.
[37] F. Thomas and L. Ros, "Revisiting trilateration for robot localization," IEEE Transactions on Robotics, vol. 21, no. 1, pp. 93–101, 2005.
[38] B. Ristic, S. Arulampalam, and N. Gordon, Beyond the Kalman Filter: Particle Filters for Tracking Applications, Artech House, 2003.
Copyright
Copyright © 2018 Md. Tanvir Hossan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.