Abstract

Localizing smartphones in indoor environments offers excellent opportunities for e-commerce. In this paper, we propose a localization technique for smartphones in indoor environments. This technique can calculate the coordinates of a smartphone using existing illumination infrastructure with light-emitting diodes (LEDs), and it can locate smartphones without further modification of the existing LED light infrastructure. Smartphones do not have a fixed position and may move frequently anywhere in an environment. Our algorithm uses multiple (i.e., more than two) LED lights simultaneously. The smartphone receives the LED-IDs from the LED lights that are within the field of view (FOV) of the smartphone's camera. These LED-IDs contain the coordinate information (e.g., x- and y-coordinates) of the LED lights. Concurrently, the pixel area of each projected image on the image sensor (IS) changes with the relative motion between the smartphone and the corresponding LED light, which allows the algorithm to calculate the distance from the smartphone to that LED. At the end of this paper, we present simulated results for predicting the next possible location of the smartphone using a Kalman filter to minimize the time delay of the coordinate calculation. These simulated results demonstrate that the position resolution can be maintained within 10 cm.

1. Introduction

Nowadays, the number of mobile devices (e.g., smartphones) is increasing dramatically. These devices have immense scope for commercial applications, for example, services, e-commerce, and e-banking. To move forward with consumer-facing commercial applications, location-based services (LBS) for smartphones need to be improved. As the location of a mobile device is unpredictable, a reliable, dynamic, accurate, and situation-adaptive localization technique is required for LBS [1]. Moreover, this technique should be secure and interruption free: LBS approaches will not increase consumer activity without a reliable localization system. Localization schemes should be designed for both indoor and outdoor environments. Indoor localization is the most promising for e-commerce applications, as most of them are indoor based. The access density of smartphones is highest in shopping malls, supermarkets, and transit stations (i.e., railway, bus, and subway), and indoor environments are hubs for almost all web-based business applications. Outdoor localization schemes are also promising and may be improved with insights from indoor localization solutions.

Both industry and academic research institutes have recently shown interest in the issue of indoor smartphone localization, and various schemes have been proposed [2, 3]. The most common and widely used localization system relies on the global positioning system (GPS) [4]. However, this system has three particular limitations (i.e., poor GPS signal reception, loss of GPS signal, and limited localization accuracy) [5], especially in indoor environments, and it is not suitable for underground or indoor localization: the signal from the satellites to a receiver must be line-of-sight (LOS), and buildings, soil, water, trees, or even poor weather conditions inhibit this signal. Time-of-flight (TOF) cameras are another possible candidate for localization [6]. Despite their advantages, TOF systems are expensive and sometimes require complex deployment scenarios, which makes them impractical as a universal approach. TOF cameras also have other drawbacks; they are useful only for detection and ranging purposes and do not facilitate communication [7]. Received signal strength indication (RSSI), time of arrival (TOA), time difference of arrival (TDOA), and angle of arrival (AOA) are physical parameters of a radio signal that can be used for localization with specially distributed monitors [8, 9]. Other approaches used for indoor localization include computer vision (CV) and artificial intelligence (AI) [10]. None of these approaches is ideal for indoor localization. For example, RSSI depends on environmental conditions; it is affected by shadowing, path loss, signal fading, and interference from neighboring cells [11], so errors can be introduced into the calculated value. TDOA, in turn, suffers from narrowband signals, reduced data transmission rates, and lower location precision [12]. Approaches combining RSSI and AI also cannot overcome the challenges of indoor localization, because the subjective and objective data held by the AI affect how new input data are handled [13]: to improve system performance, the AI may feed its output back to the input of the system, which heavily distorts new data when the input variables vary quickly. Another approach, photogrammetry, measures object locations from photographs; it is a very simple way to generate a map from sequentially taken photographs [14].

To deal with the existing challenges of indoor localization, the optical frequency band of the electromagnetic spectrum is a novel candidate that avoids the problems typical of radio frequency (RF). The communication technology that uses optical frequencies is known as optical wireless communication (OWC). A promising subsystem of OWC is optical camera communication (OCC), where multiple LEDs are used as transmitters and a camera or image sensor (IS) is used as the receiver [15]. In an indoor environment, the communication channel for OCC is uninterrupted, the signal-to-noise ratio (SNR) is high, security is ensured by LOS communication, signal processing is simple, and high speed communication is possible [16]. OCC has multiple-input-multiple-output (MIMO) functionality [17], which is ideal for analyzing many objects at the same time. Therefore, OCC is a good technique for indoor localization systems; moreover, it is useful not only for localization but also for communication. In 2011, the IEEE formed a working group to finalize the standardization specification (i.e., IEEE 802.15.7) [18]. The development of the standardization specification for OCC is expected to be finalized by mid-2018.

In the proposed scheme, smartphones are located using signals from LED lights via OCC and photogrammetry. Using OCC techniques, smartphone cameras receive the LED-ID signals from each LED light (or fixture). The LED-ID encodes the coordinates of the LED light. The size of the image of the LED fixture on the image sensor of the camera changes with the relative distance between the smartphone and the LED fixture; this distance is calculated using photogrammetry, which helps to find the remaining coordinates of the smartphone. To optimize the resolution, a Kalman filter is applied to predict the next possible locations of the smartphone.

The rest of the paper is organized as follows. In Section 2, we survey the literature on indoor localization systems. Our proposed scheme is described in Section 3. In Section 4, the channel modeling and communication are explained. The processes for distance calculation and localization are presented in Section 5. In Section 6, the performance of our proposed system using the Kalman filter is evaluated. Finally, we summarize our work and future research directions in Section 7.

2. Related Works

In this section, several indoor localization schemes are summarized. OWC based indoor localization can be classified into three approaches: triangulation, fingerprinting, and proximity. Triangulation requires geometric positions, fingerprinting is a scene-analysis approach, and the proximity-based approach is a grid method. In [19], vision-based positioning and navigation are introduced with the help of a camera and a newly defined 3D map of an indoor environment. The authors explored ways to improve accuracy and system reliability through several experiments, focusing on applications such as mobile robot navigation, transportation, visitor guiding, security, and emergency services. In the same manner, the authors in [20] propose a vision-based system using a camera as the main sensor for object detection, tracking, and localization with distributed sensing, communication, and parallel computing. They used camera nodes as inputs to sensor fusion techniques, such as the Extended Kalman Filter (EKF) and Maximum Likelihood (ML), to reduce the computational burden, and they also tested the system on moving objects. The drawback of their proposed system is the lack of automatic determination of sensor locations. For an indoor environment, a vision-based navigation system using augmented reality was proposed in [21]. The system recognizes a location automatically using image sequences taken in the indoor environment and then realizes augmented reality by seamlessly superimposing detailed location information on the user's view. The user carries a wearable PC with a camera that takes the image sequences and transmits them to remote PCs for further processing. Using this system, the average location recognition success rate was around 89%; however, its performance deteriorates in harsh environmental scenarios. In [22], a neural network and OCC based indoor positioning system is proposed that estimates the camera position for trained and untrained environments. The error of the estimated camera position is less than 10 mm but can increase up to 200 mm depending on the camera position. In [23], the authors demonstrated a system for positioning and orientation that overlays location information on camera phones in an indoor environment. They find a location from images with the help of existing standard hardware and process little data within a very short time to generate accurate results. The limitation of their proposed system in handling whole environments is that feature detection and matching must improve while maintaining low latency; their system is also complex and very costly for navigation on a standard camera phone. In [24], the authors propose an OCC camera model based on radiometry and camera geometry. They consider a camera model for an indoor environment at 50 cm separation from the LEDs, with a distance of 200 cm between the LEDs.

3. Overall Architecture

We propose a localization scheme in which smartphones can be located in an indoor environment with the help of LED lights and the smartphone's camera. In our proposed scheme, we consider several factors to identify smartphone locations in indoor environments. As in Figure 1, all LED light fixtures are attached to the ceiling, the distance between ceiling and floor is constant for a particular indoor environment, the camera of the smartphone must be under the illumination of the LED light fixtures, and there must be at least three LED lights in the field of view (FOV) of the camera, while the smartphone maintains continuous communication with the lighting server. A lighting server provides API-based access to reproducible, web-based visualizations. The FOV of a camera is the solid angle through which the IS can sense electromagnetic radiation such as visible light. The system performance improves as the number of LED lights within the camera FOV increases, and our algorithm requires at least three light fixtures to deliver accurate location measurements. For our system, the LED light fixture coordinates are parallel to the world coordinate system. The vertical distance between ceiling and floor (i.e., the z-coordinate) is equal for all LED lights, where the z-axis of the light fixtures points in the direction opposite to the world z-axis. However, the camera coordinates of the smartphone change frequently with respect to the LED light fixture coordinates.

As shown in Figure 2, each LED light broadcasts its own coordinate information (i.e., its x- and y-coordinates) as a modulated LED-ID signal. The z-coordinate is the same for every LED light in a given indoor environment; therefore, the z-coordinate term in the LED-ID is omitted to reduce complexity and data packet size. These LED-IDs are transmitted as modulated optical signals. After receiving the signals from the LED lights, smartphones process the data in two ways: they identify the LED-IDs from the received signals, and they measure the distance to the LED light with the corresponding LED-ID using photogrammetry. This distance is measured from the size of the light fixture and the number of pixels of the corresponding LED light on the IS. The geometric image size of the LED on the IS varies with the distance between the light source and the camera: if the LED is located far from the camera, the image on the IS is small, while a comparatively large image is formed for an LED light near the camera.

The LED-IDs with the corresponding distance calculations are sent to the lighting server by the smartphone via a wireless fidelity (Wi-Fi) access point (AP). The coordinate information for each LED light is stored in the lighting server. After receiving a signal from the smartphone, the lighting server matches the LED-ID signal with its stored location information; the algorithm can then map the location of the smartphone.

Meanwhile, the location of the smartphone may change during this processing time. Therefore, the lighting server uses a Kalman filter tracking algorithm that predicts the next possible location from the current location of the smartphone, and this location information is sent to the smartphone. Additionally, the LED light's projected image on the IS changes as the smartphone moves. Therefore, the LED lights should be placed at constant intervals to ensure that at least three LED light images are available from any location in the room.

4. Channel Modeling and Communication

4.1. Propagation Model of Light from LEDs

The radiation pattern of LED lights is affected by the roughness of the chip faces and the geometry of the encapsulating lens. Several models are used to describe the directional strength of light from a light source (i.e., an LED); a popular approach is Monte Carlo ray tracing [25]. In a Gaussian (or cosine) power distribution [26], a ray of light is diffusely reflected or refracted. Because of this diffuse reflection or refraction, the final radiation pattern appears as a linear superposition of such distributions, angularly shifted as a function of the angle of incidence of every traced ray, as shown in Figure 3.

The energy flux per solid angle, known as the luminous intensity, and the transmitted optical power are the two basic properties of a light source such as an LED light. The luminous intensity at viewing angle $\phi$ is given as

$$I(\phi) = \frac{d\Phi}{d\Omega} = I(0)\cos^{m}(\phi), \tag{1}$$

where $\Phi$ is the luminous flux, $I(0)$ is the center luminous intensity of an LED, and $m$ is the Lambertian order of the source (see (8)). This luminous flux can be defined as an integration between the minimum wavelength, $\lambda_{\min}$, and the maximum wavelength, $\lambda_{\max}$, of the working optical spectrum:

$$\Phi = K_{\max}\int_{\lambda_{\min}}^{\lambda_{\max}} V(\lambda)\,\Phi_{e}(\lambda)\,d\lambda, \tag{2}$$

where $V(\lambda)$ is the standard luminosity curve, $K_{\max}$ is the maximum spectral efficacy for vision, and $\Phi_{e}(\lambda)$ is the spectral energy (radiant) flux. The integral of the energy flux in all directions is the transmitted optical power $P_{t}$, given as

$$P_{t} = \int_{\Omega}\frac{d\Phi_{e}}{d\Omega}\,d\Omega. \tag{3}$$

Figure 4(a) shows the LED light propagation direction, and Figure 4(b) shows the strength of the radiation from a single light source. The power level reaches its maximum of 40 dBm at the center of the light source, represented as yellow in Figure 4(a), and deteriorates to −80 dBm, represented as violet, from the center to the edge of the sphere. Joining the points of equal average power at given x- and y-coordinates forms the ellipses in Figure 4(b). Therefore, weak signals from neighboring light sources cause signal interference. This problem can be mitigated by removing the background noise with full control of the camera shutter speed.

4.2. LED-ID for Optical Camera Communication

Each LED light has a fixed location, and the coordinate information of each LED light differs from that of the other LED lights in the same room. The coordinates of the LED lights are parallel to the world coordinates, and these coordinates are analogous to the LED-ID. Every LED light transmits its own ID, which can be regarded as a digital tag, to the camera. For this purpose, the LED acts as the transmitter and the camera as the receiver.

The data from the LED light is sent as a modulated signal by varying the intensity of the light using an LED driver integrated circuit (IC). This driver controls the light intensity by dimming the LED through a variety of methods. Data can be encoded in the phase of the LED light signal, and the phase can be changed by turning the LED light on and off. However, turning the LED back on immediately after it is fully off is not always possible; therefore, it is recommended to dim the light to a minimum intensity instead. This modulation is known as IM/DD (Intensity Modulated/Direct Detection) modulation [27]. The available modulation techniques to transmit signals from LED light sources can be classified as follows:

(i) On-off keying (OOK) [28]: the two logic signals of a digital transmission, "1" and "0", are represented as high and zero voltage at the transmitter end. This is achieved by flickering the illumination of the LED light to represent the on-off state of the transmitter.

(ii) Pulse width modulation (PWM) [29]: the modulated signal from the LED light is transmitted in the form of a square wave. The desired pulse level is obtained by adjusting the LED light dimming.

(iii) Pulse-position modulation (PPM) [30]: the message is encoded in the time position of a single transmitted light pulse within each symbol period.

(iv) Orthogonal frequency division multiplexing (OFDM) [31]: data is sent as parallel substreams of modulated data using multiple orthogonal subcarriers in a channel.

(v) Frequency-shift keying (FSK): a digital signal is carried by instantaneous frequency shifts at a constant amplitude.

(vi) Phase-shift keying (PSK) [32]: a digital signal is carried by instantaneous phase shifts of the baseband signal.

For high speed data transmission channels, OFDM is used where the possibility of multipath fading and intersymbol interference is high [33]. Despite the enormous advantages afforded by OFDM, we do not implement it, because high speed data transmission is not required for most indoor localization applications. In these cases, OOK is a better choice for data transmission from LEDs. Figure 5 shows how, after encoding and modulating the light, the LED light fixture transmits data to the camera of the smartphone. Our system recovers the LED coordinates after demodulating and decoding the signal received from the image processor.
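As a concrete illustration of OOK signaling for an LED-ID, the following Python sketch builds the on/off intensity pattern an LED driver could repeat; the preamble, payload width, and Manchester coding here are our own assumptions for the sketch, not a specification from this system.

```python
# Minimal OOK LED-ID framing sketch (assumed format: 6-symbol preamble,
# 8-bit LED-ID payload, Manchester coding so the lamp never rests fully off
# long enough to produce visible flicker).

PREAMBLE = [1, 0, 1, 0, 1, 1]  # assumed frame-start pattern

def manchester(bits):
    """Encode each bit as a transition: 1 -> (1, 0), 0 -> (0, 1)."""
    out = []
    for b in bits:
        out += [1, 0] if b else [0, 1]
    return out

def ook_frame(led_id: int, id_bits: int = 8):
    """Build the on/off intensity sequence for one LED-ID frame, MSB first."""
    payload = [(led_id >> i) & 1 for i in reversed(range(id_bits))]
    return PREAMBLE + manchester(payload)

# Example: the frame an LED driver would repeat for LED-ID 0x2A.
print(ook_frame(0x2A))
```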

4.3. Channel Modeling for OCC

The energy-per-bit to spectral-noise-density ratio for the pixel model of the IS [34] can be calculated as

$$\frac{E_b}{N_0} = \frac{\left(U\,A\,\tau\right)^2}{a\,A + b + \sigma_n^2}, \tag{4}$$

where $E_b$ represents the energy-per-bit, $N_0$ represents the spectral-noise-density, $U$ is the unit pixel value, $A$ is the amplitude of the signal, $\tau$ is the camera exposure duration as a ratio of the signal cycle, $\sigma_n$ is a noise term, and $a$, $b$ are model fitting parameters for system noise.

Considering the distortion in the channel, the signal-to-noise-plus-interference ratio (SNIR) can be determined as

$$\mathrm{SNIR} = \frac{\left(D\,\bar{I}_p\right)^2}{\sigma_N^2}, \tag{5}$$

where $\bar{I}_p$ is the average pixel intensity transmitted from the LED light, $\sigma_N^2$ is the noise-plus-interference power, and $D$ is the average distortion factor whose value lies between 0 and 1, that is, $0 \le D \le 1$. $D = 1$ indicates minimum signal loss, where the LED light focuses directly on the camera lens, and $D = 0$ indicates that no image pixel is generated on the image sensor of the camera.

With the additive white Gaussian noise (AWGN) characteristic of the camera channel [35], the channel capacity of the space-time modulation can be expressed by the Shannon capacity formula as

$$C = R_f\,B_s\log_2\left(1 + \mathrm{SNIR}\right), \tag{6}$$

where $R_f$ is the frame rate of the smartphone camera and $B_s$ is the spatial bandwidth, which represents how much information is carried by the pixels in each image frame. The spatial bandwidth is equivalent to the number of orthogonal or parallel channels in a MIMO system.

The bit error rate (BER), which depends on the SNIR and the modulation scheme, measures the impact of the channel. The noise sources that affect the transmission of the light signal from the LEDs are intersymbol interference, background and transmitter LED shot noise, and thermal noise.

Contemporary smartphones use complementary metal oxide semiconductor (CMOS) based ISs with a rolling shutter. With this shutter technique, light intensities on the IS are captured row by row, and the whole image is composed of different pixel arrays. Therefore, the exposure time delay between pixel array lines records the changing illumination states of the LED light as groups of pixels in one image. The optical channel DC gain models the channel characteristic from the LED lights to the camera and can be determined [36] as

$$H(0) = \begin{cases} \dfrac{(m+1)\,A}{2\pi d^2}\cos^{m}(\phi)\,T_s(\psi)\cos(\psi), & 0 \le \psi \le \psi_c,\\[4pt] 0, & \psi > \psi_c, \end{cases} \tag{7}$$

where $m$ is the order of Lambertian emission, $A$ is the area on the IS, $d$ is the distance between an LED and the IS, $T_s(\psi)$ is the optical filter coefficient for signal transmission, $\phi$ is the angle of irradiance, $\psi$ is the angle of incidence, and $\psi_c$ is the camera FOV semiangle. Here, $m$ can be defined as

$$m = -\frac{\ln 2}{\ln\left(\cos\Phi_{1/2}\right)}, \tag{8}$$

where $\Phi_{1/2}$ is the LED semiangle at half power.
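The DC gain in (7) and the Lambertian order in (8) are straightforward to evaluate numerically. The sketch below follows the symbol names of the text; the unity optical filter gain and the example geometry are assumptions of ours.

```python
import math

def lambertian_order(half_angle_deg: float) -> float:
    """Lambertian emission order m from the LED semiangle at half power, Eq. (8)."""
    return -math.log(2) / math.log(math.cos(math.radians(half_angle_deg)))

def channel_dc_gain(A, d, phi, psi, psi_c, m, Ts=1.0):
    """Optical channel DC gain H(0) from an LED to the image sensor, Eq. (7).

    A: detector (IS) area in m^2, d: LED-to-IS distance in m,
    phi: angle of irradiance, psi: angle of incidence,
    psi_c: camera FOV semiangle (angles in radians), Ts: filter gain.
    Returns 0 outside the FOV.
    """
    if psi > psi_c:
        return 0.0
    return ((m + 1) * A / (2 * math.pi * d ** 2)
            * math.cos(phi) ** m * Ts * math.cos(psi))

# Example: a 60-degree half-power LED seen 2.5 m away, on-axis.
m = lambertian_order(60.0)   # evaluates to 1.0 for a 60-degree semiangle
print(channel_dc_gain(A=1e-6, d=2.5, phi=0.0, psi=0.0,
                      psi_c=math.radians(35), m=m))
```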

To calculate the channel output, the channel noise (which is independent of the signal characteristic) and the number of LED-ID signals must be considered:

$$y(t) = \sum_{i=1}^{N} R\,H_i(0)\,x_i(t) + n(t), \tag{9}$$

where $N$ is the number of LED-ID signals, $x_i(t)$ is the signal transmitted by the $i$th LED, $n(t)$ is the channel noise, and $R$ is the camera responsivity.

The average optical power received at the IS of the camera can be calculated as

$$P_r = H(0)\,P_t. \tag{10}$$

5. Distance Calculation and Localization

5.1. Distance Calculation between LED Light and Camera

The fundamental operation of a camera is diagrammed in Figure 6, where an image of a target LED light is projected onto the IS of a camera. Light from the target LED passes through the camera lens and is concentrated on the IS plate. The projected image on the IS plate is an inverted image of the LED light fixture.

Consider that $f$ is the focal length of the camera, $d_o$ is the distance from the camera lens to the target LED light, and $d_i$ is the distance from the lens to the projected image on the IS. Therefore, we can write

$$\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}. \tag{11}$$

The magnification $M$ of the lens is the ratio of the projected image size to the geometric size of the LED light. If the camera projects a square image on the IS, where the height and width of the target LED are $H$ and $W$ and those of the projected image are $h$ and $w$, respectively, then the lens magnification can be expressed as

$$M = \frac{h}{H} = \frac{w}{W} = \frac{d_i}{d_o}. \tag{12}$$

If we ignore the loss in the optical channel, then the projected image is a scaled replica of the fixture. More precisely, since $d_o \gg f$, (11) gives $d_i \approx f$. By combining (11) with (12), we get

$$M = \frac{f}{d_o - f} \approx \frac{f}{d_o}. \tag{13}$$

The number of pixels of the image is the ratio of the projected image size on the IS to the unit pixel area of the same sensor. If the number of pixels on the IS is $N$, $a$ is the unit pixel area of the IS, and $A_s$ is the area of the target LED light source, then we obtain

$$N = \frac{h\,w}{a} = \frac{M^2 A_s}{a} = \frac{f^2 A_s}{a\,d_o^2}. \tag{14}$$

Different shapes of light fixtures are available in the market; however, rectangular/square and circular fixtures are typical. For a circular LED light fixture with radius $r$, the area of the light source is $A_s = \pi r^2$. For a rectangular light source of width $w_s$ and length $l_s$, the area is $A_s = w_s\,l_s$.

If we only consider the LED lights within the smartphone FOV, the distance from each LED light to the smartphone camera is different for every light. The projected image of an LED light facing the camera straight-on is larger than the image of an LED light located at an angle to the same camera. These distances can be determined from (14), which must be rearranged because of the relative motion between the camera and the LED light fixtures. For each camera, the focal length $f$ and the unit pixel area $a$ of the IS are fixed. Consequently, if we know the real physical area of the lighting fixture, then we can write (14) as

$$d_o = \frac{K}{\sqrt{N}}, \qquad K = f\sqrt{\frac{A_s}{a}}, \tag{15}$$

where $K$ is constant for each camera and LED light.

Figure 7 shows how the image area varies with the distance of the LED lights from the camera. From (15), the distance from the IS to the LED light fixture is inversely proportional to the square root of the image area on the IS.
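As a sketch of the distance computation in (15), under assumed camera and fixture parameters (a 4 mm focal length, 1.4 µm pixels, and a 60 cm × 60 cm fixture; none of these values are taken from the paper):

```python
import math

def distance_from_pixels(n_pixels: float, focal_len_m: float,
                         pixel_area_m2: float, fixture_area_m2: float) -> float:
    """Distance from camera to LED fixture per Eq. (15): d = K / sqrt(N),
    with K = f * sqrt(A_s / a) constant for a given camera and fixture."""
    K = focal_len_m * math.sqrt(fixture_area_m2 / pixel_area_m2)
    return K / math.sqrt(n_pixels)

# Example: a 0.6 m x 0.6 m fixture imaged over ~326,000 pixels
# (about 571 pixels across) with a 4 mm lens and 1.4 um pixels.
d = distance_from_pixels(326_000, 4e-3, (1.4e-6) ** 2, 0.6 * 0.6)
print(f"{d:.2f} m")  # -> 3.00 m
```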

5.2. Scenario of Image Area on the IS for a Moving Camera

The smartphone is the only moving device with respect to the LED lights in indoor environments. The FOV of the camera shifts with changes in the smartphone's location; therefore, the images of the LED lights on the IS of the camera also change. Additionally, the lighting infrastructure must be designed in a way that keeps at least three LED lights within the FOV of each camera. Figure 8 describes a scenario in which the number of LED lights detected by the camera varies between five and four as the smartphone moves from location 1 to location 5.

Square blocks, black dots, and circles represent the LED lights, the camera, and the FOV of the camera, respectively. At location 1 of the smartphone, five LED light images are projected on the IS. The number of projected images varies from five to four as the smartphone moves from location 1 to location 5. If more than three LED light images are projected on the IS, the location of the smartphone can be calculated more accurately.

Next, we consider another scenario in which the smartphone moves from location 1 to location 2, as shown in Figure 9(b). Due to the change in smartphone location, the images of the LED lights on the IS also change. We consider three LED lights (red, green, and blue), all attached to the ceiling. At location 1 of the smartphone in Figure 9(c), the blue LED light is relatively close to the camera, meaning the angle from the camera to the blue LED light is small. This angle is larger for the red LED light than for the blue one. In contrast, the angle is nearly zero for the green LED light. These distances are calculated using (15).

We denote the straight-line distances of the red, green, and blue LED lights from the lens of the camera as $d_R$, $d_G$, and $d_B$, respectively. Furthermore, $N_R$, $N_G$, and $N_B$ are the numbers of pixels on the IS for the red, green, and blue light fixtures, respectively, at location 1. Therefore, (15) can be written for each LED light as

$$d_R = \frac{K}{\sqrt{N_R}}, \tag{16}$$

$$d_G = \frac{K}{\sqrt{N_G}}, \tag{17}$$

$$d_B = \frac{K}{\sqrt{N_B}}. \tag{18}$$

From Figure 9(c), the size of the image on the IS is largest for the green LED light, and the size gradually decreases from the blue to the red LED light. Comparing the images on the IS can therefore be expressed mathematically as $N_G > N_B > N_R$. As we know from (15), the number of pixels projected on the IS for a certain object depends only on the distance between the camera and the object when the other factors remain constant. Combining this ordering of the images with (16)–(18) gives the conclusion $d_G < d_B < d_R$.

At location 2 in Figure 9(a), the angle between the camera and the blue LED light fixture increases, whereas the angle to the red LED light decreases; the image of the red LED therefore grows while that of the blue LED shrinks. In contrast, the image size of the green LED light is almost identical due to the small shift in angle.

For location 2, the distances from the LED lights to the camera lens are denoted $d_R'$, $d_G'$, and $d_B'$ for the red, green, and blue LEDs, respectively. Moreover, the numbers of pixels on the IS are labeled $N_R'$, $N_G'$, and $N_B'$. Therefore, applying (15) to the three LED light images, we get

$$d_R' = \frac{K}{\sqrt{N_R'}}, \tag{19}$$

$$d_G' = \frac{K}{\sqrt{N_G'}}, \tag{20}$$

$$d_B' = \frac{K}{\sqrt{N_B'}}. \tag{21}$$

From Figure 9(a), the general expression for the numbers of image pixels on the IS is $N_G' > N_R' > N_B'$. From (19)–(21), the distances can therefore be ordered as $d_G' < d_R' < d_B'$.

5.3. Uploading Information to the Lighting Server

The smartphone sends the distance information with the corresponding LED-ID to the lighting server via a Wi-Fi AP. This information is sent as a packet with two slots, where the first slot holds the coordinate information (LED-ID) of the LED light and the second slot holds its distance information. The LED light coordinates are already stored in the lighting server. After receiving a signal from the smartphone, the lighting server generates a virtual map of the LED lights by extracting the information from the packet. With the mathematical model of trilateration (or multilateration for more than three LED lights), the lighting server calculates the location of the smartphone.
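A minimal sketch of what such a two-slot packet could look like; the field types and byte order are hypothetical, since the paper states only that each packet pairs an LED-ID with its measured distance.

```python
import struct

def encode_measurement(led_id: int, distance_cm: float) -> bytes:
    """Pack one two-slot packet: slot 1 the LED-ID (which the lighting
    server maps to stored fixture coordinates), slot 2 the distance."""
    return struct.pack("!Hf", led_id, distance_cm)  # assumed 2-byte ID + float32

def decode_measurement(packet: bytes):
    """Server-side unpacking of the two slots."""
    return struct.unpack("!Hf", packet)

# Example: the smartphone reports LED 7 measured at 336.05 cm.
led_id, dist = decode_measurement(encode_measurement(7, 336.05))
print(led_id, round(dist, 2))  # -> 7 336.05
```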

5.4. Computing Smartphone Location

The beam of an LED light propagates from ceiling to floor, and the illumination spreads 360° around the center of the light source. At a specific distance from the LED light, the intensity of illumination is equal everywhere on a circle around it. If the smartphone is located at any point on such a circle, the projected image area on the IS is the same as at any other point on that circle. Therefore, the chance of false location identification is high with a single LED light source. Figure 10(a) shows the original location of the camera along with probable false locations of the camera (shown faded). This false location information creates errors during location mapping in the lighting server.

Consequently, to mitigate location-estimation errors when only a single LED light is visible, we consider another LED light as a reference for the first one. As shown in Figure 10(b), the location narrows to the circle formed by the intersection of the two spheres. Although more LED lights are available in this scenario, ambiguity still remains. Hence, a third LED light is necessary to deliver accurate location measurements.

With a third LED light, we can narrow the location of the smartphone down to one possible location. In Figure 10(c), three circles centered on the landmarks overlap at three different locations, where the radius of each circle is the measured distance from its landmark. Therefore, two locations of the smartphone other than the original one still appear possible even with three landmarks. However, this extra location information does not cause confusion in the smartphone position estimation: it is possible to estimate the smartphone's location accurately by comparing the information from two LED lights with the information from the third.

The method of determining the location of the smartphone using three fixed reference points (or LED lights) is known as trilateration [37]; with more than three points, it is known as multilateration. For trilateration, the measuring platform simultaneously solves three relevant nonlinear equations. The reference LED lights can be situated either in a triangle or in a straight line.

If $(X_i, Y_i, Z_i)$ are the coordinates of any LED light under the ceiling, where $i = 1, 2, 3$, and the coordinate information of the smartphone camera is $(X, Y, Z)$, then the distance $d_i$ from the camera to the $i$th LED light can be represented with the following equation:

$$d_i = \sqrt{(X_i - X)^2 + (Y_i - Y)^2 + (Z_i - Z)^2}. \tag{22}$$

Subtracting the squared form of (22) for the first LED from the others eliminates the quadratic terms, so a matrix equation can be generated from the above equations for $i = 2, 3$ as

$$2\begin{bmatrix} X_2 - X_1 & Y_2 - Y_1 & Z_2 - Z_1\\ X_3 - X_1 & Y_3 - Y_1 & Z_3 - Z_1 \end{bmatrix}\begin{bmatrix} X\\ Y\\ Z \end{bmatrix} = \begin{bmatrix} d_1^2 - d_2^2 + X_2^2 + Y_2^2 + Z_2^2 - X_1^2 - Y_1^2 - Z_1^2\\ d_1^2 - d_3^2 + X_3^2 + Y_3^2 + Z_3^2 - X_1^2 - Y_1^2 - Z_1^2 \end{bmatrix}. \tag{23}$$

The matrix equation can be rewritten compactly as

$$\mathbf{A}\mathbf{x} = \mathbf{b}. \tag{24}$$

Two different cases can occur when solving the trilateration problem for locating the smartphone. The LED lights can be distributed randomly as in Figure 11(a) or aligned in a straight line as shown in Figure 11(b).

To identify the location of a smartphone from the reference LED lights located at the vertexes of a triangle, the general solution of (24) can be expressed as

$$\mathbf{x} = \mathbf{x}_p + t\,\mathbf{x}_h, \tag{25}$$

where $\mathbf{x}_p$ is denoted as the particular solution and $t$ is a real parameter. If $\mathbf{A}\mathbf{x}_h = \mathbf{0}$ is the homogeneous system, then $\mathbf{x}_h$ is its solution.

The matrix $\mathbf{A}$ is written in pseudoinverse format to determine the particular solution, $\mathbf{x}_p = \mathbf{A}^{+}\mathbf{b} = \mathbf{A}^{T}\left(\mathbf{A}\mathbf{A}^{T}\right)^{-1}\mathbf{b}$. On the other hand, the value of $\mathbf{x}_h$ can be evaluated from the two rows of $\mathbf{A}$, $\mathbf{a}_1$ and $\mathbf{a}_2$, as their cross product, $\mathbf{x}_h = \mathbf{a}_1 \times \mathbf{a}_2$.

The following solution can be generated after substituting (25) into the range equation for the first LED, which yields a quadratic in $t$:

$$t = \frac{-\mathbf{x}_h^{T}\left(\mathbf{x}_p - \mathbf{p}_1\right) \pm \sqrt{\left(\mathbf{x}_h^{T}\left(\mathbf{x}_p - \mathbf{p}_1\right)\right)^2 - \left\lVert\mathbf{x}_h\right\rVert^2\left(\left\lVert\mathbf{x}_p - \mathbf{p}_1\right\rVert^2 - d_1^2\right)}}{\left\lVert\mathbf{x}_h\right\rVert^2}, \tag{26}$$

where $\mathbf{p}_1 = (X_1, Y_1, Z_1)$.

To identify the location of a smartphone from reference LED lights located in a straight line, the general solution of (24) is expressed as

$$\mathbf{x} = \mathbf{x}_p + t_1\,\mathbf{x}_{h1} + t_2\,\mathbf{x}_{h2}, \tag{27}$$

where the homogeneous system is

$$\mathbf{A}\,\mathbf{x}_{h1} = \mathbf{A}\,\mathbf{x}_{h2} = \mathbf{0}, \tag{28}$$

and $\mathbf{x}_{h1}$ and $\mathbf{x}_{h2}$ are two of its solutions with real parameters $t_1$ and $t_2$.

The mathematical expression of (24) is different for the case with more than three LED lights. The solution can be found by solving the multilateration problem. The relevant equation, for $n$ LED lights, can be expressed as follows:

$$2\begin{bmatrix} X_2 - X_1 & Y_2 - Y_1 & Z_2 - Z_1\\ \vdots & \vdots & \vdots\\ X_n - X_1 & Y_n - Y_1 & Z_n - Z_1 \end{bmatrix}\begin{bmatrix} X\\ Y\\ Z \end{bmatrix} = \begin{bmatrix} d_1^2 - d_2^2 + X_2^2 + Y_2^2 + Z_2^2 - X_1^2 - Y_1^2 - Z_1^2\\ \vdots\\ d_1^2 - d_n^2 + X_n^2 + Y_n^2 + Z_n^2 - X_1^2 - Y_1^2 - Z_1^2 \end{bmatrix}. \tag{29}$$

On the basis of the least squares method, the solution of (29) can be found as

$$\mathbf{x} = \left(\mathbf{A}^{T}\mathbf{A}\right)^{-1}\mathbf{A}^{T}\mathbf{b}. \tag{30}$$
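The linearization in (29) and the least-squares solution (30) can be written compactly in code. The sketch below uses NumPy; the anchor coordinates and distances are example values of our own (a triangular layout is chosen here, since collinear anchors leave (30) rank-deficient, as discussed above for the straight-line case).

```python
import numpy as np

def multilaterate(leds: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """Least-squares position from n >= 3 LED anchors (rows of `leds`)
    and measured distances, per Eqs. (29)-(30): x = (A^T A)^{-1} A^T b."""
    p0, d0 = leds[0], dists[0]
    A = 2.0 * (leds[1:] - p0)                       # linearized coefficient rows
    b = (d0**2 - dists[1:]**2
         + np.sum(leds[1:]**2, axis=1) - np.sum(p0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)       # solves (30)
    return x

# Example: three ceiling LEDs (cm, 2D slice) and measured distances.
leds = np.array([[0.0, 0.0], [150.0, 0.0], [75.0, 150.0]])
dists = np.array([320.01, 336.02, 171.12])
print(multilaterate(leds, dists))                   # -> approx. [40.0, 317.5]
```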

In Figure 12, three LED lights are located at three points in a two-dimensional plane. Their illumination spheres intersect at two points that are possible locations for the smartphone camera. The lighting server chooses between the multiple possible locations of the smartphone using trilateration.

Figure 13 shows the systematic estimation error generated when the lighting server estimates the position of the smartphone. The error is minimal for the horizontal bias (Figure 13(a)) of the indoor environment and much higher for the vertical bias (Figure 13(b)). Therefore, system performance degrades considerably when measuring vertical position.

Around a cluster of three LED lights, the possible locations of the smartphone are shown in Figure 14. Dotted lines represent the optical links between the camera and the LED lights, and solid lines represent the fixed distances between the LED lights. In almost all cases, the distance from the smartphone to each LED light is different. In some cases (Figures 14(b), 14(c), 14(d), and 14(g)), the distances to two of the LED lights are equal while the distance to the third differs. Additionally, there are a few cases (Figures 14(e), 14(f), and 14(h)) where all three distances differ from each other. There is only one case (Figure 14(a)) where the camera is equidistant from all LED lights. The algorithm can locate smartphones at these locations without error.

Figure 15 shows an example of the final stage of a server-side process for estimating the smartphone coordinates. Three LED lights are imaged within the FOV of the camera. The distance between adjacent LED lights is equal, and in our tests this value is 150 cm. The LED light coordinates are LED1 (0, 0), LED2 (150, 0), and LED3 (300, 0), all in cm. Here, the y-coordinates are the same for these three LED lights but the x-coordinates are all different. We chose these coordinates to simplify this example. The z-coordinate is equal for all the LED lights, so we ignore it in our calculations.

Let us consider a smartphone placed between LED1 and LED2 and far away from LED3; more precisely, the smartphone is closer to LED1 than to LED2. The distances from the camera to LED1, LED2, and LED3 are 320 cm, 336.05 cm, and 410.37 cm, respectively, measured by calculating the image sizes on the image sensor. The relative distances from these three LED lights show that the smartphone's x-coordinate is 40 cm away from LED1 and 110 cm away from LED2. In this example, LED1 is located at the origin, and the coordinates of the smartphone are calculated with respect to it; hence 40 cm is the x-coordinate of the smartphone. The y-coordinate of the smartphone camera can be measured with the Pythagorean theorem and is calculated as 317.5 cm. Finally, the estimated coordinate is (40, 317.5).
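Written out, the Pythagorean step uses the measured range to LED1 and the recovered x-offset:

$$Y = \sqrt{d_1^2 - X^2} = \sqrt{320^2 - 40^2} = \sqrt{100800} \approx 317.5\ \text{cm}.$$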

5.5. Estimate the Next Location of the Smartphone

After finalizing the smartphone coordinates, the lighting server sends the coordinate information to the smartphone, updates this information, and stores it for future use. The location of the smartphone is always changing: while the server estimates the smartphone position, the smartphone may have moved. Another server-side algorithm is therefore required to run in parallel to estimate the velocity, acceleration, and next possible position of the smartphone. We use a Kalman filter to track the next position of the smartphone. This filter relies on the present position measurement rather than motion information (e.g., velocity and acceleration) reported by the smartphone [38].

A Kalman filter is a recursive linear estimator mostly used to approximate errors in navigation applications, providing a minimum-variance estimate in the least squares sense under noisy processes. The Kalman filter gain, the current estimation, and the new error in the estimation are the three important calculations in Figure 16. The Kalman filter gain weighs the error in the estimate against the error in the measurement. The current estimation depends on the previous estimation and the present measured value; the relative weighting between them is also set by the Kalman filter gain. Furthermore, the Kalman filter gain and the current estimation are needed to compute the new error in the estimate, which is passed on as the error in the future estimation. The preliminary (predicted) location estimate of the smartphone can be described as

$$\hat{\mathbf{x}}_k^- = \mathbf{A}\,\mathbf{x}_{k-1} + \mathbf{w}_k, \tag{31}$$

where $\mathbf{x}_{k-1}$ is the previous (initially, $\mathbf{x}_0$) location of the smartphone, $\mathbf{A}$ is the state (or adaptation) matrix, and $\mathbf{w}_k$ is the noise added to the location.

The measurements and state vectors are weighted by their respective process covariance matrices. The process covariance matrix (or error in the position estimation) can be represented as

$$\mathbf{P}_k^- = \mathbf{A}\,\mathbf{P}_{k-1}\,\mathbf{A}^{T} + \mathbf{Q}_k, \tag{32}$$

where the initial process covariance matrix is $\mathbf{P}_0$ and $\mathbf{Q}_k$ is the process noise.

The filter deweights the measured value when its variance is large and the gain is low in comparison to the state estimate; this leads the filter to prioritize the predicted state over the measurements. In the opposite circumstances, the measured value is weighted more than the predicted value due to its small variance and the high gain. The gain of the Kalman filter, known as the Kalman gain, depends on the error in the estimate and the error in the measurement. The Kalman gain $\mathbf{K}$ is the ratio of the error in the estimate to the total error in both the estimate and the measurement:

$$\mathbf{K} = \frac{\mathbf{P}_k^-\,\mathbf{H}^{T}}{\mathbf{H}\,\mathbf{P}_k^-\,\mathbf{H}^{T} + \mathbf{R}}, \tag{33}$$

where $\mathbf{R}$ is the observation (measurement) error and $\mathbf{H}$ is the transformation matrix, which converts the covariance matrix into the Kalman filter gain matrix. The value of the Kalman gain lies between 0 and 1 (i.e., $0 \le \mathbf{K} \le 1$). If $\mathbf{K}$ is near 1, the error in the measurement is nearly 0; in this case, the estimates are unstable (large error in the estimate) and the measurements are accurate.

The error in the location estimate will decrease when the value of $\mathbf{K}$ is close to 0, and the difference between the estimated and actual locations narrows. The expression for the current estimation can be written as

$$\hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^- + \mathbf{K}\left(\mathbf{z}_k - \mathbf{H}\,\hat{\mathbf{x}}_k^-\right), \tag{34}$$

where $\hat{\mathbf{x}}_k$ is the present estimate, $\hat{\mathbf{x}}_k^-$ is the predicted estimate from (31), and $\mathbf{z}_k$ is the measured smartphone coordinate.

Similarly, if the Kalman filter gain is large, then the present error in the estimate is small. The new error in the estimate, which feeds the next prediction, can be defined as follows:

$$\mathbf{P}_k = \left(\mathbf{I} - \mathbf{K}\,\mathbf{H}\right)\mathbf{P}_k^-. \tag{35}$$
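A minimal constant-velocity Kalman tracker over the x-y position, following the predict/gain/update cycle of (31)–(35); the sampling interval, noise covariances, and initial state below are assumed values for the sketch, not the parameters of the paper's simulation.

```python
import numpy as np

dt = 1.0                                        # sampling interval (s), assumed
A = np.block([[np.eye(2), dt * np.eye(2)],      # state: [x, y, vx, vy]
              [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])    # only position is measured
Q = 0.01 * np.eye(4)                            # process noise (assumed)
R = 25.0 * np.eye(2)                            # measurement noise, cm^2 (assumed)

x = np.zeros(4)                                 # initial state
P = 100.0 * np.eye(4)                           # initial covariance

def kf_step(x, P, z):
    # Predict: Eqs. (31)-(32)
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Gain: Eq. (33)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    # Update: Eqs. (34)-(35)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Feed a few noisy position fixes (cm); the estimate converges toward the track.
for z in ([40.0, 317.5], [41.2, 316.8], [39.5, 318.1], [40.3, 317.2]):
    x, P = kf_step(x, P, np.array(z))
print(x[:2])   # filtered position estimate
```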

5.6. Postponed Signal Propagation

The smartphone location measurement and signal propagation stop if the smartphone user leaves the room. To recognize this situation, the lighting server broadcasts a message several times and waits for a reply. If there is no reply, the server stops sending data and stores the position information.

6. Simulation Result

To evaluate the performance of our proposed scheme, we used a smartphone in a 1600 sq. ft. indoor environment. The test instrument specifications are provided in Table 1. The simulation results will vary with the camera and luminaire parameters.

Figure 17 plots BER versus SNIR for the theoretical and simulation results. The two curves almost merge because we ignore the effect of channel noise in our simulation. The figure shows that the BER increases as the SNIR of the OOK signaling decreases.

In Figure 18, the simulation result shows that the location of the smartphone is initially unidentified and marked as unassigned. Estimation accuracy was poorer for the 1st estimation than for the 4th; the estimation process improves sequentially after a few steps. Meanwhile, the distance between successive estimations remains between 9 and 10 cm.

The possibility of changing coordinates in the z-direction is negligible; therefore, we only calculate the x- and y-coordinates of the smartphone. In Figure 19, the green line shows the mean value of the smartphone location and the red line shows the estimated value. The location estimation using the Kalman filter is plotted for the x-axis in Figure 19(a) and for the y-axis in Figure 19(b). Figure 19 shows a deviation of the location estimation from the mean location. We consider a 1 Hz sampling rate and a run time of 50 s; overall, 50 samples were used in the simulation.

Distance measurement using OCC depends on the size of the projected image of the LEDs on the IS. As the distance increases, the projected image occupies less area on the IS than at a shorter distance. Therefore, the possibility of smartphone localization shrinks as the vertical distance between the smartphone and the ceiling LED lights increases. In Figure 20, when the vertical distance from the camera to the ceiling remains within 10 m, the image occupies an area of at least 4 pixels. From 10 to 35 m, the localization possibility is reduced as the occupied area falls below 4 pixels. Theoretically, the image must occupy at least the unit pixel area of the image sensor; however, it is difficult to ensure that the projected image aligns edge-to-edge with a pixel. Therefore, beyond 35 m the localization possibility is zero because the occupied area remains below the unit pixel area. We consider a fixed transmitter size, and in that case its image occupies less than the unit pixel area beyond 35 m. If the transmitter size changes, the distance measurement performance changes accordingly.

Localization estimation error occurs due to frequent changes of the smartphone position. We tested our algorithm both with and without the Kalman filter and plotted the comparison in Figure 21. A significant performance deviation is seen in the figure. At the initial stage of measurement, both show the same percentage of estimation error, and in both cases the estimation errors decrease exponentially with simulation runtime. At 10 s, the estimation error is near zero for the Kalman filter based estimation, whereas at the same time the estimation without the Kalman filter shows a 50% error.

7. Conclusion

In this paper, we proposed a smartphone localization system for an indoor environment. Using OCC for smartphone localization is a novel idea, and we use a photogrammetry technique along with OCC. The localization resolution for the smartphone is kept within 10 cm. The proposed system relies on a central lighting server for the positioning calculations. Signaling from the LED light fixtures and localization of the smartphone are confined to a given indoor environment; therefore, this localization scheme is more secure. Additionally, the chance of error in the position estimation is higher for a system without the Kalman filter, so we included a Kalman filter to track the next possible location of the smartphone. Thus, the proposed scheme is more accurate than existing localization schemes. The lighting fixtures are useful not only for localization but also for illumination. In future work, we will test and evaluate the performance in different environmental scenarios (e.g., escalators and staircases) and will consider variation in height between the smartphone and the ceiling light fixtures. We are also trying to optimize the position identification resolution without the Kalman filter to make the system simpler.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the Global IT Talent Support Program (IITP-2017-0-01806) supervised by the IITP (Institute for Information and Communication Technology Promotion).